# Arbitrum Grants Powered by Impact Evaluations

This proposal presents a novel funding model for Arbitrum Grants: a scalable, plural, and transparent system for grant distribution that fosters innovation and rewards hard work.

### Key Points

* Size funding rounds based on the L2's economic activity; hold larger grant rounds during periods of lower TVL and transaction activity
* Divide each round into 3-5 funding pools for different prioritized ecosystem growth areas
* Require projects to submit structured data that facilitates permissionless observation of their activity and impact
* Incentivize impact evaluators to review structured project data and submit a recommended percentage of funding for each project
* Shift the role of delegates from voting on projects to voting on impact evaluators' scoring of all eligible projects

### Round Sizing Formula

Size grant rounds based on ecosystem economic activity over a time period, i.e., a function of the network's total value locked (TVL) and transaction activity. More $ARB should be granted during bearish or low-activity cycles, incentivizing and supporting hard work during challenging times. (An illustrative sizing function is sketched at the end of this proposal.)

### Funding Pool Parameters

Establish distinct funding pools to support development and growth at different network levels, e.g., separate "infrastructure" and "onboarding" pools. We propose leveraging Gitcoin's Round Manager, with an initial 3-5 funding pools, including one for individual contributions.

### Project Eligibility Requirements

Require grant applicants to provide structured project data that facilitates [permissionless](https://twitter.com/carl_cervone/status/1658080750011310082?s=20) observation, including GitHub repos, social media handles, project wallets, and deployed smart contracts. We propose using Gitcoin Passport to prevent application spam, and the [hypercert schema](https://hypercerts.org/docs/whitepaper/impact-space) to enumerate contributors, work scopes, and work periods. These standardized inputs make project eligibility more transparent and easier to evaluate.

### Impact Evaluation

Allocate a fixed percentage of each funding pool to decentralized impact evaluators. An "[impact evaluator](https://research.protocol.ai/publications/generalized-impact-evaluators/)" takes a set of eligible projects as input and returns a recommended percentage of funding for each project. Evaluators are expected to rely on open data feeds to assess the eligible projects and to submit auditable scoring functions alongside their recommendations. We propose dedicating 10% of funding initially to encourage building and innovation in this area. (A sketch of the evaluator interface appears at the end of this proposal.)

### Funding Decisions

Delegates make funding decisions by voting on impact evaluators instead of individual projects. This elevates Arbitrum delegates to the meta-problem of signaling which types of impact they value most. As the ecosystem grows and diversifies, the work of impact evaluators becomes increasingly valuable to delegates' network and social capital, resulting in a more scalable and effective system.
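
### Appendix: Illustrative Sketches

To make the counter-cyclical round sizing concrete, here is a minimal TypeScript sketch. The function name, baseline budget, clamping bounds, and weights are hypothetical assumptions for illustration, not proposed parameters; the only property the sketch aims to capture is that the round grows as TVL and transaction activity fall relative to a trailing baseline.

```typescript
// Illustrative only: the baseline budget, clamping bounds, and weights below
// are hypothetical parameters, not values proposed for Arbitrum governance.

interface ActivitySnapshot {
  tvlUsd: number;   // total value locked on the L2 over the period
  txCount: number;  // network transactions over the period
}

/**
 * Size a grant round counter-cyclically: the lower the current TVL and
 * transaction activity relative to a trailing baseline, the larger the round.
 */
function sizeGrantRound(
  current: ActivitySnapshot,
  trailingBaseline: ActivitySnapshot,
  baseRoundArb: number,  // round size when activity matches the baseline
  tvlWeight = 0.5,
  txWeight = 0.5,
): number {
  // Ratios below 1 mean activity is under the trailing baseline (a "bearish" period).
  const tvlRatio = current.tvlUsd / trailingBaseline.tvlUsd;
  const txRatio = current.txCount / trailingBaseline.txCount;

  // Weighted activity index, clamped to avoid extreme round sizes.
  const activityIndex = tvlWeight * tvlRatio + txWeight * txRatio;
  const clamped = Math.min(Math.max(activityIndex, 0.5), 2.0);

  // Inverse relationship: lower activity => larger round, and vice versa.
  return baseRoundArb / clamped;
}

// Example: activity at half the baseline doubles the round size.
const round = sizeGrantRound(
  { tvlUsd: 1_000_000_000, txCount: 20_000_000 },
  { tvlUsd: 2_000_000_000, txCount: 40_000_000 },
  10_000_000, // hypothetical base round of 10M ARB
);
console.log(`Round size: ${round.toLocaleString()} ARB`); // 20,000,000 ARB
```

The clamp keeps round sizes within a bounded multiple of the base budget, so governance retains predictable treasury exposure even in extreme market conditions.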
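Below is a minimal sketch of the impact evaluator interface described above, also in TypeScript. The type names and fields are hypothetical and merely mirror the structured data listed under Project Eligibility Requirements; the essential contract is that an evaluator consumes permissionlessly observable project data and returns a recommended funding share per project, alongside an auditable scoring function.

```typescript
// Hypothetical types: field names follow the structured data described in
// "Project Eligibility Requirements", not an existing Gitcoin or hypercert API.

interface EligibleProject {
  id: string;
  githubRepos: string[];
  socialHandles: string[];
  wallets: string[];    // project wallet addresses
  contracts: string[];  // deployed smart contract addresses
  // hypercert-style fields
  contributors: string[];
  workScopes: string[];
  workPeriod: { start: string; end: string };
}

interface EvaluationResult {
  projectId: string;
  recommendedShare: number; // fraction of the pool; all shares sum to 1
}

/** An impact evaluator maps eligible projects to recommended funding shares. */
interface ImpactEvaluator {
  /** Auditable description (or hash/URI) of the scoring function used. */
  scoringFunction: string;
  evaluate(projects: EligibleProject[]): Promise<EvaluationResult[]>;
}

// A trivial example evaluator that scores projects by contributor count,
// then normalizes the scores into funding shares.
const contributorCountEvaluator: ImpactEvaluator = {
  scoringFunction: "share_i = contributors_i / sum_j(contributors_j)",
  async evaluate(projects) {
    const scores = projects.map((p) => p.contributors.length);
    const total = scores.reduce((a, b) => a + b, 0) || 1;
    return projects.map((p, i) => ({
      projectId: p.id,
      recommendedShare: scores[i] / total,
    }));
  },
};
```

Publishing the scoring function alongside the recommendations is what makes an evaluator auditable: delegates vote on evaluators whose published logic matches the outputs they produce.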