Kevin(host), Vitalik, Sejal, and Devansh
https://www.youtube.com/watch?v=ygaEBHYllPU
Methodology: Feed YouTube subtitles to ChatGPT
Hello and welcome to the Green Pill Network podcast! If you're just joining us, we're building a coordination network society of thousands of hackers, streamers, and doers focused on using crypto to bring positive-sum digital systems to the world. This podcast features the people who are making it happen. If you want to learn more, visit our website at greenpill.network, where you can download the Green Pill book for free, join the Discord, or become a member of your local Green Pill Network chapter. Now, on with the show!
This is part two of my special series with Vitalik Buterin about public goods funding at the end of 2025. In part one, we talked about Vitalik's broad view of the design space as we transition from 2024 to 2025, discussing what builders should focus on. In this second episode of the series, we dive into deepfunding.org, a new experiment that the Gitcoin ecosystem is building with Vitalik's guidance. I should clarify that it's not just Gitcoin but the general public goods funding ecosystem coming together.
There are many teams contributing to this ecosystem, and one of the most amazing developments has been the creation of "money Legos," which allow these teams to collaborate and implement Vitalik's vision. In this episode, we discuss his vision for deep funding. Here's a quick TL;DR before we get into the episode: deep funding uses a dependency graph and a market of AI or human allocators, guided by a spot-checking jury, to pay Open Source contributors—either directly or indirectly—upstream of a project or outcome that funders care about.
So, we're moving beyond surface-level funding, which only reaches projects that people already know about. Instead, we’re looking at dependency graphs and using AI agents to determine how projects depend on one another. This is a bold experiment in public goods funding, and I’m thrilled to unveil it today on the Green Pill podcast. The next voice you’ll hear will be my own, introducing Devansh as my co-host for the episode. Then, we’ll jump into our conversation with Vitalik about deepfunding.org. I think you’re really going to enjoy this episode.
[Music Plays]
All right, here we are! Devansh, do you want to start off by introducing yourself, and then Sejal can go next? After that, we’ll dive into our questions for Vitalik.
Devansh: Thanks for having me on. My name is Devansh, and I work in Arbitrum DAO governance, as well as general DAO governance and public goods. I was originally an investigative reporter, and I think values like censorship resistance and speaking truth to power really resonate within the blockchain space. That’s what drew me in.
Sejal: Hi, I’m Sejal, and I work as a Grants Innovation Specialist at Gitcoin. My role involves designing grant programs with ecosystems. I led the first EIP PGS pilot with Gitcoin, Protocol Labs, Celo, and Pocket. I’m particularly interested in exploring innovative ways of capital allocation.
Kevin: We’re here today to talk about deep funding, a new experiment we’ve been working on. I’ll sit in the background as Sejal and Devansh ask Vitalik about deep funding and the inspiration behind it. You two take the reins, and I’ll jump back in as we start wrapping up in 15–20 minutes. Go ahead.
Devansh: Thanks. Let’s start with the first question. Vitalik, this is clearly something you’ve been thinking about for quite some time. What is the problem that prompted you to explore this idea? Also, could you explain the mechanism itself and why you chose the name "deep funding"?
Vitalik: To me, deep funding has two core insights that distinguish it from many of the previous funding models.
One of the core mechanisms of deep funding involves the idea of being "deep" in two senses: deep as in a deep graph and deep as in deep learning. Let’s explore both of these.
When we talk about the deep graph, the core mechanism revolves around a dependency graph—a graph of projects and how much value each project provides to other projects. This graph can have multiple levels. The idea is to shift from a general frame of public goods funding to one focused on dependency graphs. This concept has gained traction in commons and public-goods funding discussions, particularly over the last year.
The reason this approach seems intuitively better is that public goods funding often deals with abstract, high-level questions like, "How much value does this thing provide to humanity?" For example, trying to determine the dollar value that the Python ECC library contributes to Ethereum or humanity is incredibly challenging. However, questions like "How much credit does A deserve for B?" or "How relatively valuable is A for B compared to C for B?" are more tractable. By breaking down these intractable questions into more manageable ones, you can get higher-quality answers.
There’s psychological research supporting this. People are generally more irrational when dealing with abstract or far-mode concepts. For example, when asked how much they’d pay to save 2,000 birds versus 20,000 birds versus 200,000 birds, the average answer is often the same—$80—demonstrating scope insensitivity. Public funding can suffer from similar irrationalities when tied to large-scale, abstract challenges. Dependency graphs aim to avoid this by breaking funding into smaller, more comprehensible components.
The second mechanism involves the "deep learning" part. In my post on InfoFinance, I discuss distilled human judgment, proposing that we use AI as an accelerator. Essentially, the idea is to pair a credible, trusted human mechanism—one that is expensive but legitimate—with AI to create a system that is both efficient and almost as reliable as the human mechanism. Think of the AI as the engine and the human-based mechanism as the steering wheel.
Here’s how it works: you fill in the dependency graph, and a human mechanism spot-checks the graph in a few random places. The final output is based on the AI’s answers, weighted by how closely they align with human preferences in the areas the humans reviewed. This could be structured as a prediction market with AI participants, or a simpler system. You can even draw an analogy to reinforcement learning with human feedback (RLHF). The core idea is that AI provides scalability, while the human mechanism, though more expensive per query, provides the direction.
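To make the "engine plus steering wheel" idea concrete, here is a minimal sketch of that weighting scheme in Python. This is not the official deep funding implementation; the scoring rule (mean squared error) and the inverse-error blending are illustrative assumptions, and all project and model names are hypothetical.

```python
# Sketch (assumed, not the official mechanism): weight each AI allocator's
# dependency-graph answers by how closely it matches the human jury's
# spot checks, then blend all submissions into one final allocation.

def score_allocator(allocation, spot_checks):
    """Mean squared error between an allocator's edge weights and the
    jury's answers on the (few) edges the jury actually reviewed."""
    errors = [(allocation[edge] - human_value) ** 2
              for edge, human_value in spot_checks.items()]
    return sum(errors) / len(errors)

def combine(allocations, spot_checks):
    """Blend all submitted graphs, weighting each one inversely to its
    disagreement with the human spot checks."""
    weights = {name: 1.0 / (1e-9 + score_allocator(a, spot_checks))
               for name, a in allocations.items()}
    total = sum(weights.values())
    edges = set().union(*(a.keys() for a in allocations.values()))
    return {edge: sum(weights[n] * allocations[n].get(edge, 0.0)
                      for n in allocations) / total
            for edge in edges}

# Two hypothetical AI allocators scoring edges (dependency -> project):
allocations = {
    "model_a": {("lib1", "proj"): 0.30, ("lib2", "proj"): 0.10},
    "model_b": {("lib1", "proj"): 0.10, ("lib2", "proj"): 0.40},
}
# The jury only spot-checked one edge:
spot_checks = {("lib1", "proj"): 0.28}

final = combine(allocations, spot_checks)
# model_a agrees far more closely with the jury on the checked edge,
# so the blended weight for ("lib1", "proj") lands near model_a's 0.30.
```

The key property this illustrates: the jury only answers a small random sample of questions, yet its judgment propagates to every edge in the graph through the credibility weights assigned to each AI participant.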
Sejal: What would be unintended consequences of deep funding?
Vitalik: One notable aspect of deep funding is that it introduces a contest-like structure. People use AIs to generate large-scale graphs with allocations, essentially competing to provide answers that closely align with what the human jury ultimately selects. If you perform well in the competition, you earn a larger reward. Structured competitions like this can be highly valuable; they focus attention and can drive innovation. For instance, when I was in high school, I participated in programming competitions and found them incredibly engaging.
However, every mechanism with adversarial incentives risks unintended outcomes. If I were to predict where such a system might break, one possibility is that even short-range spot-checking—assessing how valuable A is for B—might prove difficult for humans to answer reliably. Another concern is that the complexity of the dependency graph could become too challenging to navigate. When you introduce AIs, there's always the potential for people to game the system. For instance, someone might manipulate the system by making 500 superficial commits instead of five meaningful ones.
The theory is that human spot-checking judges will catch these superficial attempts, prompting people to create more sophisticated models that account for them. But the question remains: will this balance actually hold? That’s something we’ll need to observe and evaluate as the system evolves.
Devansh and Sejal: Another potential concern lies in the dynamics of software development itself. We’ve seen examples of maintainers facing significant stress, such as when vulnerabilities are intentionally introduced. This could lead to developers avoiding dependencies or altering their internal processes to avoid scrutiny. How will these changes affect the broader ecosystem? There’s a risk that developers might selectively choose dependencies or change their practices in ways that are counterproductive.
Vitalik: If I had to summarize, we’re not comparing against perfection; we’re comparing against the alternatives. In private markets, for example, the decision to use software often depends not only on its quality but also on its terms and monetization strategy. Addressing these kinds of issues is incredibly complex. Another area of potential failure lies in peer-to-peer or academic collaborations, which may also encounter challenges as they interact with this funding model.
Peer review and citations, along with citation rings and related issues, are relevant concerns when considering unintended consequences.
Devansh: We've seen cases where governments have used algorithmic systems with disastrous outcomes. For instance, in the Netherlands, the tax authority used an algorithm that wrongly flagged childcare-benefit recipients as fraudsters, denying benefits and demanding repayments, which ultimately led to the resignation of the government. How is this mechanism different from those initiatives where governments use AI for filtering, eligibility, and similar purposes?
Vitalik: This question is insightful because it highlights the importance of both the mechanism itself and our expectations of it. Our political culture—possibly global but particularly in the West—tends to associate the private sector with risk-taking and the public sector with guarantees, certainty, and procedural fairness. Logically, these associations don’t have to coincide, but culturally they do. For example, if someone like Elon Musk makes an arbitrary decision to give $40,000 to one influencer and not another, people accept it. However, if a government denies benefits instead of granting them, it’s treated as a much bigger issue.
Welfare payments, in particular, require caution because their goal is inherently risk-averse: ensuring people have access to essentials like healthcare, food, and basic living expenses. Welfare cannot operate like startup returns. But when it comes to public goods funding, the goal is different. Here, stability and guarantees are not the primary focus. In fact, there’s an argument to be made that our civilization is missing precisely the opposite—a system where people can take risks, achieve something highly valuable, and potentially be rewarded handsomely.
One part of the answer is that culturally, we need to be more willing to accept risks in public goods funding. Another part involves recognizing the context in which we use these mechanisms. Some people prefer stability—like a tenure-like job—because of psychological reasons, poverty, family obligations, medical conditions, or other factors beyond their control. Others thrive in high-risk, high-reward environments. Both preferences are valid, and we need systems that cater to each.
Additionally, we must accept that no single mechanism can serve everyone. For example, Protocol Guild operates more like a tenure-like structure. It offers stability and won’t suddenly stop funding due to an arbitrary glitch or an AI malfunction. I believe in creating a diversity of mechanisms tailored to different goals and groups of people. Together, these systems can create a landscape that is diverse, balanced, and fair without placing all responsibility on individual components.
Sejal: What are additional use cases we can envision for deep funding beyond software development and GitHub?
Vitalik: For example, art is an intriguing possibility. In the age of AI, art has complex dependency graphs. Through analysis, we might probabilistically determine, for instance, that a piece of art was "2.3% inspired" by Game of Thrones, even if that influence stemmed from AI models that inadvertently incorporated copyrighted material. While stopping such influences outright is unrealistic, building a structure that traces these intricate credit and dependency relationships and compensates contributors accordingly would be remarkable.
The research and idea space is another compelling use case. Contributions in these areas are often less tangible but critical. Take the invention of AMMs (Automated Market Makers) as an example. The history involves contributions from many individuals—Hayden, myself, Bor, Martin Köppelmann, Robin Hanson, and others—all of whom were influenced by various predecessors. AI’s ability to make the illegible legible could help untangle these webs of influence, just as it has in other contexts. For example, in my earlier post, I highlighted AI-generated portrayals of Bitcoiners and Ethereum supporters, each with distinct, recognizable elements reflecting their cultural and ideological traits.
Other potential media, like EIPs (Ethereum Improvement Proposals), could be considered. However, funding EIPs directly is risky, as it might compromise the neutrality of the protocol development process or incentivize unnecessary proposals. These challenges underscore the importance of caution in deciding what and how to fund.
Devansh and Sejal: Science is another area with exciting potential. Dependency tracking and citation graphs could offer new ways to visualize and support the scientific ecosystem. The primary hurdle here is the lack of structured data. Platforms like GitHub have well-organized data, but many other domains do not, making it a significant challenge to extend this approach more broadly.
Devansh: Moving on, how can people get involved in this pilot? Since this is a new mechanism, there are different ways for people to contribute and help make it successful.
Vitalik: The mechanism has two main components: the "engine" (AI) and the "steering wheel" (human judgment). The steering wheel involves human decision-making, particularly in conducting random spot checks on specific questions, which requires nuance and careful consideration.
The steering wheel tasks, involving human judgment and random spot checks, will undoubtedly evolve and refine over time. These tasks are ideal for individuals experienced in both on-chain and off-chain funding processes. On the other side, the engine aspect is where participants tackle the problem of filling a graph's edges with values between zero and one, ensuring that the sum of incoming edges for any given node remains less than one. While mathematically simple in its problem statement, the challenge lies in aligning this process with the ground truth signals derived from spot checks.
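The engine-side problem statement Vitalik describes can be written down directly. Below is a sketch, under the assumption that a submission is just a mapping from (dependency, dependent) edges to weights; the function names and the toy graph are illustrative, not the actual submission format.

```python
# Sketch of the "engine" constraint as stated: every edge weight lies in
# [0, 1], and for each node the weights of its incoming edges sum to less
# than 1 (the remainder is the credit a project keeps for itself).

def validate_submission(edges):
    """edges: dict mapping (dependency, dependent) -> weight in [0, 1]."""
    incoming = {}
    for (src, dst), w in edges.items():
        if not 0.0 <= w <= 1.0:
            raise ValueError(f"edge {(src, dst)} weight {w} outside [0, 1]")
        incoming[dst] = incoming.get(dst, 0.0) + w
    for node, total in incoming.items():
        if total >= 1.0:
            raise ValueError(f"incoming weights into {node} sum to {total}, "
                             f"which is not less than 1")
    return True

# A valid toy graph: two libraries feed a project, which feeds Ethereum.
edges = {
    ("lib_a", "project"): 0.25,
    ("lib_b", "project"): 0.40,
    ("project", "ethereum"): 0.05,
}
validate_submission(edges)  # passes: 0.25 + 0.40 < 1, and 0.05 < 1
```

Satisfying the constraints is the easy part; as the conversation notes, the real competition is producing weights that agree with the jury's spot checks.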
Think of this as a fun programming competition. Essentially, participants create bots to "play a game," similar to programming challenges many of us enjoyed as teenagers. The incentives will include programmatic prizes for achieving high scores and softer prizes for public contributions, such as creating interpretable models, data visualizers, training tools, or utilities for gathering relevant information. The goal is to foster collaboration and openness, avoiding a fully opaque competition model. Contributions that support both the mechanism itself and other contestants will be highly valued.
Devansh & Sejal: This approach starkly contrasts with the example of government AI use, where processes are often opaque, and power is centralized. Here, everyone can participate, and even if mistakes are made, the community can step in to improve processes through contributions, such as GitHub commits. To encourage engagement, there will be specific calls to action on the deepfunding.org website. Submissions of models are encouraged, and the team hopes for participation from academics, community members, and anyone interested in contributing.
Devansh: As a final fun question, do you have a favorite underdog public goods project that deep funding could help uncover and support?
Vitalik: One example I keep revisiting is the Optimism retroactive funding round that supported the Keccak team—the cryptographers who invented the hash function foundational to Ethereum and used extensively today. This work, deeply rooted in cryptography, extends far beyond the cryptocurrency world and demonstrates the importance of funding low-level, fundamental contributions to cryptography and public goods. Projects like these, which provide value across diverse domains, are precisely the kinds of efforts that deep funding aims to recognize and support.
It was amazing that we were able to fund something like Keccak. One of the things you sent me, Devansh, was that famous internet meme of software as a huge tower of different blocks, with one tiny piece at the base maintained by some random guy in Nebraska. Those kinds of projects—small but critical—are exactly what I’d be excited about unearthing and supporting.
Kevin: Before we wrap up, I just want to thank all of you for being my ride-or-die group of mechanism and pilot designers. I’m really excited about this program, and I want to thank Vitalik for being on the episode.
Sejal: One thing I’ve personally felt during this pilot—and Devansh, you and I have discussed this—is the incredible pace at which everything is happening. How do you think we can replicate this speed with other mechanisms? It’s clear that this level of progress is something we need in more areas. I’d love to hear your thoughts.
Vitalik: I think one thing we need is more financial resources. In addition to funding rounds aimed at supporting specific spaces, we should allocate more resources for experimental rounds, where the primary goal is to gain experience and learn from running those rounds. Beyond financial resources, we also need more intellectual capital—creating projects and opportunities that people are genuinely excited about, want to participate in, and can contribute to meaningfully.
That’s one of the theoretical benefits of democratic and participatory mechanisms. When you give more people the opportunity to engage, they not only participate but also contribute to improving the mechanism itself. Encouraging this involvement and talking about funding as a first-class priority within the ecosystem is something I’m aiming to focus on in 2025.
Kevin: Amazing. Alright, this is my second attempt to close out the episode. Vitalik, thank you so much for joining us. Sejal and Devansh, thank you for being my ride-or-die collaborators on this pilot. I’m really excited about the future of public goods funding as we head into 2025. Thanks, everyone, for your time, and we’ll see you at deepfunding.org!
Welcome to our post-episode discussion about deepfunding.org and our conversation with Vitalik Buterin. Sejal, Devansh, what were your thoughts on the episode?
Devansh: I think Vitalik covered such a breadth of topics. It’s hard to believe it was only 15 minutes! He broke down the mechanism into two elements—dependency graphs and spot checks—which I found incredibly useful. Then, he explored applications across government, art, and industry. It’s such an exciting new approach, and I particularly liked how he reframed public goods funding through the lens of dependency graphs. Many feel that public goods funding as a term has become overused, so this new perspective feels refreshing and promising.
Sejal: For me, I love the direction we’re heading. I’m most excited about tapping into communities outside of Web3, such as AI communities. That intersection is something we’ve already discussed a lot, and we’re seeing the beginnings of that wave in Web3.
Bringing new communities into this open ecosystem is something I’m really excited about. The openness of the competition, with no gatekeeping, allows anyone to submit a model. I’m particularly interested in exploring how Vitalik mentioned that funding should be treated as a first-class priority in ecosystems next year. With some of the work we’re doing in Allo, I’m looking forward to seeing how we can build on that.
Kevin: One of the things that strikes me as truly remarkable is the increasing velocity of mechanism design and its deployment. I’ve been in this space for seven years, and looking back, the pace at which ideas are moving from concept to market has only accelerated. For example, I asked Vitalik to do this podcast just two weeks ago for my annual interview about what public goods builders should focus on. In that short time, the deepfunding.org experiment has come together, and we’re already announcing it as a lean pilot. This speed to market is incredibly exciting, especially when you consider how we’re leveraging Vitalik’s mechanism design ideas alongside Gitcoin’s resources.
What excites me even more is how this increasing pace allows us to explore the design space in new and dynamic ways. Deep funding itself is exciting, but it also showcases the broader acceleration in Web3 experimentation. We’re moving in lockstep toward more innovative ways to allocate public goods funding, and that’s thrilling to witness.
Kevin: Another element that’s been phenomenal is the network-first approach we’ve adopted. This approach emphasizes working collaboratively with partners like Devansh, Sejal, and other organizations. The way this has come together is nothing short of incredible. The speed we’ve achieved is a direct result of this approach. For anyone with a big idea, we want to collaborate and make it happen.
Kevin: For those unfamiliar, a network-first approach means assembling resources on demand by partnering with our network of builders and mechanism designers. This is in contrast to relying on a centralized dev team with a rigid roadmap. At Gitcoin, we’ve embraced this direction for 2025, allowing for greater fluidity in experiments like deep funding. If you’re part of an ecosystem with a bold idea, Gitcoin is here to collaborate and bring it to life.
Devansh: To add to that, the success of this pilot wouldn’t have been possible without organizations like Gitcoin, which has spent years thinking deeply about mechanisms. They’ve been an exceptional partner in building this. Sejal has been fantastic at spearheading efforts, and we’ve also collaborated with Open Source Observer, which specializes in analyzing GitHub data and dependency graphs between code repositories. Additionally, hosting the competition on Kaggle—a leading data science platform—has allowed us to engage with a broad range of data scientists and developers.
The network-first approach has clear advantages because we’re collaborating with organizations and teams that have already dedicated years to solving specific problems. For example, Drips has developed a way to fund people working on GitHub who don’t have wallets, allowing them to claim funds whenever they’re ready. Pairwise has been working on comparison and voting mechanisms, essential for spot checks, and has already learned from mistakes we don’t need to repeat. These collaborations save us from reinventing the wheel and provide a robust foundation to build on.
Kevin: One concept Vitalik introduced in part one of the podcast was the idea of public goods Legos. This includes quadratic funding, sybil-resistance mechanisms, retroactive funding, futarchy, and tools like Open Source Observer (OSO) that provide data. These "money Legos" are public goods that can be combined in countless ways. For instance, tools like Radicle, Drips, Superfluid, and Giveth are all part of this ecosystem. The permutations of these Legos are what enable the rapid pace of building in this space.
If there are n Legos, there are on the order of n² ways to combine them in pairs. This means a Web3 hacker working on public goods funding can now achieve in a weekend what a centralized government couldn’t accomplish 10 years ago with $100 million and a thousand engineers. The exponential innovation made possible by these Legos, combined with the network-first approach, is what will make deep funding a successful pilot. It’s this synergy that will accelerate our exploration of the design space for public goods funding in 2025.
I’m so grateful to Vitalik for not only providing the vision but also supporting pilots like this to push the space forward boldly. With only a few minutes left, let’s talk about the details of deepfunding.org and direct the audience to the platform.
Devansh: Deepfunding.org is the pilot experiment, and the timeline is as follows: the competition officially opens on December 12th. Then, on January 20th, additional data will be released from the JWY dataset for those building deep-learning solutions to fill in the graph weights. This staged approach gives participants a starting point and allows for incremental progress on the challenge.
Sejal: Additionally, we’re launching a primer series to complement this pilot. The first session, focused on Shapley values, will take place on December 17th. The goal is to build a strong academic foundation for what we’re creating. Joining the Telegram group via the website will provide all the details about this series and keep you updated on everything happening.
Call to action time! If you’re interested in joining the Deep Funding ecosystem, visit deepfunding.org and check it out today. If you’d like a custom mechanism built for your ecosystem, get in touch with Sejal, Devansh, or myself.
And don’t forget to join us at Schelling Point in Denver! You can learn more at schellingpoint.gitcoin.co.
Thank you, Devansh and Sejal, for being my ride-or-die collaborators on this pilot. I’m here for deepfunding.org, and I’m excited to see what we can achieve together.
Peace!
[Music]