While building allo-scan, a block explorer that aims to be the single source of truth for the Allo v2 protocol across all networks, we wanted to explore the various indexing solutions available in the web3 domain. This article captures the solutions we explored, what we ended up picking, and why.

For some context on why this mattered: when Gitcoin built allo-v1, we defaulted to the graph to index all our data. Since allo-v1 was deployed on multiple chains, a subgraph had to be deployed for each of those chains, and depending on the chain, we had to explore different hosting options within the subgraph ecosystem:

| Hosting Type    | Description            | Network              |
|-----------------|------------------------|----------------------|
| Hosted Service  | Hosted by subgraph     | fantom, optimism     |
| Subgraph Studio | Rely on other indexers | mainnet              |
| Self Hosting    | Hosting your own node  | public goods network |

**The Wins:**

- Write once and deploy once for each chain
- Being able to write tests to run locally

**The Tricky Parts**

- Running the graph locally required a lot of compute resources
- We quickly realised that having a dapp (grant-explorer) index data from all the graph endpoints would be very slow
- The cost of reads would spike during rounds
- Our backend services, which relied on the graph to fetch all the allocations/votes, would return inconsistent results depending on the indexer. This only affected the mainnet graph, which was deployed using Subgraph Studio. It caused a few bugs: we'd have 5k+ events, and since the queries are paginated, these were tricky to track down
- Self hosting was tricky to set up, but after that it was smooth sailing. Debugging also came with its own difficulties

**Exploring Spec**

While building allo-v2, we wanted to explore other indexing solutions to see if we could have a smoother development lifecycle, and we were introduced to https://spec.dev/.
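Before diving in, here is a minimal sketch of the per-chain fan-out described in the tricky parts above. The endpoint setup, page size, and event counts are made up for illustration; this is not our actual code, just the shape of the problem.

```typescript
// Hypothetical sketch: each chain has its own subgraph endpoint, and results
// arrive in pages, so a dapp has to loop over endpoints *and* pages before
// it can merge anything. All names and numbers here are illustrative.

type Vote = { id: string; chainId: number };

// Stand-in for a paginated GraphQL endpoint: returns up to `first` items
// starting at `skip`, the way a typical subgraph query behaves.
function makeEndpoint(chainId: number, total: number) {
  const rows: Vote[] = Array.from({ length: total }, (_, i) => ({
    id: `${chainId}-${i}`,
    chainId,
  }));
  return (first: number, skip: number): Vote[] => rows.slice(skip, skip + first);
}

// Drain one endpoint page by page. With 5k+ events, a single short page
// from an inconsistent indexer silently truncates the result — which is
// exactly the class of bug described above.
function fetchAll(
  endpoint: (first: number, skip: number) => Vote[],
  pageSize = 1000
): Vote[] {
  const out: Vote[] = [];
  for (let skip = 0; ; skip += pageSize) {
    const page = endpoint(pageSize, skip);
    out.push(...page);
    if (page.length < pageSize) break; // last page reached
  }
  return out;
}

// One full pagination loop per chain, then a client-side merge.
const endpoints = [makeEndpoint(1, 2500), makeEndpoint(10, 1200), makeEndpoint(250, 300)];
const merged = endpoints.flatMap((e) => fetchAll(e));
console.log(merged.length); // 4000 votes gathered across 3 chains
```

Every new chain adds another endpoint to this loop, and every read multiplies across them, which is where both the slowness and the read costs came from.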
While it's still in its early stages, we realised building and deploying on Spec was a lot simpler than the graph. Here is an overview of what that looks like:

- You run a few CLI commands to create a group and add your contracts to it. Having a config file we could edit would have been cleaner, but it is still easy to set up
- Here is where Spec shines IMO: being able to support multiple chains with a single deploy command to control all of them was a big win for the team, unlike the subgraph, where we'd need multiple graphs to be deployed. Having everything stored together and filterable by chainId makes the dapp experience faster, and analysing the data becomes easier. The same end result can still be achieved via the graph, but the steps to get there with Spec felt ridiculously easy
- Spec auto-indexes events on the added contracts, which makes it easier to get running right off the bat. Here are the events auto-indexed on allo-v2, with zero code: https://spec.dev/allov2/events
- Like the graph, Spec lets you write custom live objects to accommodate business logic or structure the data in a way better suited to the dapp. Here are the live objects: https://spec.dev/allov2/live-tables. You can do things like resolve an IPFS hash and actually index its content (this is quite powerful; while the graph does this too, we saw a lot of intermittent failures)
- One selling point for Spec was that the data we've published can be used by other members of the community, who can build their own database using the events and live objects we had published. It's too early to see how this helps us, but as allo-v2 gains more adoption, this would be a powerful tool to let folks build on top of what we have already indexed. You wouldn't have to fork our spec repo and redo everything

**The Wins:**

- Responsive team: being able to give feedback to the Spec team
- Auto-indexed events + live objects make setting up data availability a lot easier
- Users can consume what we've built and publish their own spec table, as opposed to relying on us to make changes to accommodate a use case they find useful
- Having all the data stored in one table across chains; for a product like allo-v2, this is a powerful tool that helps our users
- Setting up custom filters, functions, etc. can be done easily with psql functions (it does require your team to be familiar with psql)
- Testing locally is fast and simple
- Flexibility to create migrations for custom filters

**The Tricky Parts**

- Managing which contracts are indexed is done via the CLI
- Writing tests for live objects isn't supported in Spec at this point in time
- Publishing requires working with the Spec team and cannot be done by the actual owner

That said, Spec is still a rapidly evolving product, so the tricky parts I've listed are being improved on, and I'm excited to see the solutions Spec puts forward to address them. The graph, while it still offers a solution to our indexing needs, often has our team overwhelmed, whereas Spec makes development and maintenance easier.

If you want to explore more on how we are using Spec to power our data needs, here are a few links to get you started:

- [allo-v2-spec](https://github.com/allo-protocol/allo-v2-spec): where all the live objects / custom filters are defined
- [allo-scan](https://github.com/allo-protocol/allo-scan): a block explorer for allo-v2 which is powered by Spec
- [SeaGrants](grants-app-smoky.vercel.app): a micro grants pool dapp which allows you to run grant rounds. The data here is again indexed via Spec
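As a closing illustration of the "one table across chains" win above, here is a hedged sketch of what a consumer's queries reduce to. The row shape is invented for this example and is not Spec's actual schema; the point is that per-chain views become a filter and cross-chain analytics become a group-by, with no client-side endpoint merging.

```typescript
// Illustrative only (not Spec's real schema): with every chain's rows in one
// table, a per-chain view is just a filter on chainId, and cross-chain
// aggregates need no fan-out across per-chain endpoints.

type AllocationRow = { chainId: number; poolId: string; amount: bigint };

// Hypothetical rows as they might sit in a single cross-chain table.
const rows: AllocationRow[] = [
  { chainId: 1, poolId: "pool-1", amount: 100n },
  { chainId: 10, poolId: "pool-1", amount: 250n },
  { chainId: 10, poolId: "pool-2", amount: 40n },
  { chainId: 250, poolId: "pool-1", amount: 75n },
];

// The equivalent of "WHERE chain_id = 10" — one filter, no extra endpoint.
const optimismOnly = rows.filter((r) => r.chainId === 10);

// The equivalent of "GROUP BY chain_id" — an aggregate across all networks.
const totalsByChain = new Map<number, bigint>();
for (const r of rows) {
  totalsByChain.set(r.chainId, (totalsByChain.get(r.chainId) ?? 0n) + r.amount);
}

console.log(optimismOnly.length); // 2 rows for chainId 10
console.log(totalsByChain.get(10)); // 290n
```

In the actual setup these would be SQL queries (or psql functions) against the published tables rather than in-memory filters, but the access pattern is the same.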