# Lens - neume grant milestone update (month 1)

# First month target milestones

As defined at the project kick-off:

1. Generalise neume architecture to facilitate the indexing of Lens - lead @il3ven
2. Move neume to new database structure - lead @il3ven
3. Initial work into indexing based on an array of pre-determined wallets - lead @neatonk

The first two milestones in this list will result in shipping architecture updates to the neume codebase. The third target is a research stream that will result in a "neuIP" proposal, to be developed through to shipping as part of the subsequent month's milestones.

# Update

All communication within the neume community is held through [github discussions](https://github.com/orgs/neume-network/discussions) and our weekly Open Office; Open Office notes are available within the corresponding Open Office thread in our github forum.

## Workstreams

### 1. Generalise neume architecture to facilitate the indexing of Lens

We have made good progress on this stream, writing strategies for Lens and testing how they work with the current neume architecture. This has resulted in an updated schema, defined in this [PR](https://github.com/neume-network/schema/pull/60), which has been merged into the codebase.

Inevitably we have met some expected challenges with regard to scale: there are many more records coming through from Lens than from the contracts we have previously indexed. This has triggered an [interesting discussion](https://github.com/orgs/neume-network/discussions/24) within the community around the best way to focus the indexer and filter results. The discussion has culminated in three potential options:

- Crawl for known frontends using the Lens API. This is the fastest but comes at the cost of decentralisation.
- Crawl for known frontends using onchain events. This is slow (the initial crawl will take 5-6 days to get up to speed).
- Crawl for audio files. This is slow too.
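As a rough sketch of the third option, a crawler could classify a publication as music by the MIME types its metadata declares. All type and function names below are illustrative assumptions, not the actual neume strategy API:

```typescript
// Hypothetical shape of Lens publication metadata, reduced to the
// fields relevant for audio detection. Illustrative only.
type PublicationMetadata = {
  contentURI: string;
  mimeType?: string;
  media?: { item: string; type?: string }[];
};

// Treat a publication as audio if it, or any attached media item,
// declares an "audio/*" MIME type.
function isAudioPublication(meta: PublicationMetadata): boolean {
  const types = [meta.mimeType, ...(meta.media ?? []).map((m) => m.type)];
  return types.some((t) => t?.startsWith("audio/") ?? false);
}
```

A filter like this runs on metadata alone, so the crawler never needs a hard-coded frontend list, which is what makes this option the most open to future frontend collaborations.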
Our estimate is that there isn't much difference between the 2nd and 3rd options with respect to speed. The 3rd option is better because it covers more area and opens up the potential for future collaborations with frontends. So, we will start with the 3rd option and move towards the 1st if needed.

### 2. Move neume to new database structure

After much discussion about the right database structure to output crawled data to, we have moved neume from LevelDB to SQL, as can be seen in this merged [PR](https://github.com/neume-network/crawler/pull/9).

In addition, we have started to look forward to the next version of neume, in which we want to make it easy for new users to run a node in an efficient and effective way (see the discussion [here](https://github.com/orgs/neume-network/discussions/29)). This is still an area of active research as we progress neume towards its future network state.

### 3. Initial work into indexing based on an array of pre-determined wallets

Starting out with the goal described above, this stream has expanded into a neuIP to create a more generalised methodology for neume users to index only the data that they require (rather than everything). The goal of this month's work was to conduct research in this area and then propose a neuIP, which can be seen here: [neuIP-5](https://github.com/neume-network/neuIPs/pull/15).

Next steps for this workstream are likely to include:

- Create a proof-of-concept implementation of the crawler with the proposed changes. This could highlight one or more of the use cases proposed in the neuIP (e.g. derived subsets focused on a block range or set of wallet addresses).
- Introduce a second neuIP which builds on this by using Merkle Patricia Tries and subset schema identifiers for efficient subset reconciliation in neume. A specific outcome will be an updated schema and a reference implementation which can be used to generate subset schema IDs and derive subsets via constraints.
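One way to picture a subset schema identifier is as a hash over the canonicalised constraints (block range, wallet set) that define the subset, so that two nodes describing the same logical subset derive the same ID. This is a speculative sketch under that assumption; the actual neuIP may specify a different scheme:

```typescript
import { createHash } from "node:crypto";

// Hypothetical constraints that carve a subset out of the full dataset.
type SubsetConstraints = {
  fromBlock?: number;
  toBlock?: number;
  wallets?: string[];
};

// Derive a stable subset schema ID by hashing a canonical serialisation:
// keys in a fixed order, wallet addresses lower-cased and sorted, so the
// same logical subset always yields the same ID regardless of input order.
function subsetSchemaId(constraints: SubsetConstraints): string {
  const canonical = JSON.stringify({
    fromBlock: constraints.fromBlock ?? null,
    toBlock: constraints.toBlock ?? null,
    wallets: [...(constraints.wallets ?? [])]
      .map((w) => w.toLowerCase())
      .sort(),
  });
  return createHash("sha256").update(canonical).digest("hex");
}
```

Stable IDs of this kind are what would let two nodes quickly check whether they hold the same subset before reconciling contents.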
This would also aid discussion by making the proposal more concrete.

An additional area that we are keen to add to the neuIP-5 discussion is ways to link datums across related schemas, such as tracks and NFTs. Alternatively, related datums could be embedded to simplify access. Ideally we would support both use cases and let node operators choose the trade-offs that make sense for their use case.

# Next month milestones

1. Get the Lens crawl up and running
2. Onboard frontends in the Lens community to neume
3. Refine and further develop the specification of neuIP-5