# Reply to Nlnet questions ipfs-search
Dear Jos van den Oever,
Thank you for your questions about our NLnet application. We have tried to answer each in depth.
The past year was a challenging one for many, including for the ipfs-search project. The COVID-19 pandemic and related crises impacted our team members, disrupting our work and ability to collaborate.
Despite this, ipfs-search developed considerably in 2020: we are close to a fully functioning version. We hope that with a big push in the next year or so, the project will become self-sustainable.
We have completed most of our goals for 2020. Only a few points are yet to be finalized, and we have reserved funds from the previous grant to complete those tasks.
We will discuss the outcomes more in our answer to your second question below.
We have attached a detailed budget, discussed in our answer to question 1.
## **1) Why the high cost**
"You requested 90000 euro. We struggle to see the need for such a high budget
considering the described deliverables. E.g. you request **14400** euro for a
thumbnail service. Why is the cost so high? What rates did you use?"
#### Answer
We ask for a high budget, but one that reflects the ambition of the project. The scaling behaviour, the fast growth of the network, and the size of the data set mean that the problems we are trying to solve should not be underestimated. They are highly complex and would be hard to tackle even for a full-size research team.
We have reviewed our goals for the next year and made a detailed budget for the next grant period, dividing the goals into work packages with individual work estimates.
The budget can be found attached.
We now estimate a total of **1688 work hours** by the team. Based on a rate of **60 € per hour**, this gives an **expected cost of 101,280 €**.
We of course understand if this is beyond the scope of the support NLnet can provide. We have therefore made the following overview of which project goals we consider essential and which optional.
**Essential for continuity of service:**
* Scalability of architecture
* Overall improvements and maintenance
**Optional:**
* Experiment with distributed crawling and search
* Domain-specific search, and the thumbnail service and API it depends on
* Metasearch support (searchx)
We hope that this overview combined with the attached budget can form the basis for further discussion with NLnet.
## **2) Elaborate on outcomes**
"Regarding the previous project, could you elaborate on the outcomes, especially the tasks concerning distributed search
(which is a significant part of this project)? The roadmap in the project
readme does not detail these outcomes as far as we can determine."
#### Answer
2020 was an abnormal year, also for the ipfs-search project. In various ways, project members were impacted by the global crisis that was (and is) the COVID-19 pandemic.
The project leader, Mathijs de Bruin, went through serious issues with his living circumstances: problems related to renovations gone wrong turned out to be very disruptive and time-consuming.
Despite these setbacks, we have made real progress on ipfs-search in 2020. The outcomes of our work are discussed below, as well as the milestones from our 2020 MoU that we have yet to complete.
Finally, we discuss the results of our research into distributed search; we are working on compiling these results into a full report.
### Outcomes 2020
In 2020 we achieved the following:
* Observability: tracing and metering of the live crawler.
* Sniffer rewrite: reusable multi-headed DHT-sniffer.
* Reusability/modularity:
* Separate crawler, deployment, API, sniffer, frontend repositories.
    * Complete rewrite of the metadata extractor, using asynchronous I/O.
* Rewrite of API, JS client and OpenAPI specification and documentation.
* Crawler refactor: interface abstraction, 70% test coverage, comprehensive GoDoc documentation.
* Upgrade and reindex Elasticsearch from 5.x to 7.x.
* Increase in index rate from 1 to 4 hashes per second.
* Redesign of frontend towards a VueJS-based implementation.
### 2020 Milestones to be completed
Due to setbacks discussed above, we are still working on implementing the following milestones from the previous NLnet grant:
* Deployment and stress-testing refactored crawler and rewrite of metadata extractor.
* Wrap up and launch new frontend.
We are expecting to complete these in short order.
### Distributed search research
The research goal for 2020 was to uncover a way to make ipfs-search a distributed and decentralized search experience.
We investigated the following distributed technologies as possible means to make that happen:
* Blockchain
* Hashgraph
* DAG
* Holochain
We also researched the following technologies as ways of solving the trust problem in decentralized search:
* Zero knowledge proofs
* Smart contracts
* Voting models (including Byzantine Fault Tolerant versions)
* Gossip about gossip (including Apache Gossip, gossip-python, and Smudge libraries)
We researched the current incarnations of the distributed web:
* IPFS
* BTFS
* FileCoin
We researched an architecture for making the crawler distributed, using the following three components (a minimal sketch follows the list):
- An _Overlay Network Layer_ responsible for the formation and maintenance of a distributed search engine network, and for communication between peers. These can be unstructured or structured networks. If a network is not scalable, a supernode architecture can be used to improve performance; hence a client must support flat as well as supernode architectures.
- A _Peer and Content Distribution Function_ that determines which clients to connect with; each client could have a copy of this function. The hash list and content range associated with a client can change as nodes join or leave the network. The function distributes hashes to crawl, as well as content, among peers; it uses the underlying distributed web network to provide load balancing and scalability, and takes the proximity of nodes into account. Initially we would use a static distribution function; the hash list and content range assignment functions can be hash functions.
- A _Crawler_ that downloads and extracts distributed web objects, and sends and receives data from other peers using the underlying network.
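To illustrate how these three components could fit together, here is a minimal Go sketch. All type and function names are hypothetical, and the static distribution function shown is deliberately naive; a real implementation would use consistent hashing and account for node churn and proximity.

```go
package crawler

import "fmt"

// OverlayNetwork abstracts the formation and maintenance of the search
// engine network and communication between peers; implementations may
// be flat (unstructured/structured) or supernode-based.
type OverlayNetwork interface {
	Peers() []string                    // currently known peers
	Send(peer string, msg []byte) error // deliver a message to a peer
}

// DistributionFunc decides which peer is responsible for a given hash.
type DistributionFunc func(cid string, peers []string) string

// StaticDistribution is a deliberately naive static distribution
// function, mapping a CID onto the peer list by byte sum. A real
// implementation would use consistent hashing and take node proximity
// and churn (joining/leaving peers) into account.
func StaticDistribution(cid string, peers []string) string {
	if len(peers) == 0 {
		return ""
	}
	sum := 0
	for _, b := range []byte(cid) {
		sum += int(b)
	}
	return peers[sum%len(peers)]
}

// Crawler downloads and extracts distributed web objects, delegating
// hashes it is not responsible for to the right peer via the overlay.
type Crawler struct {
	Network    OverlayNetwork
	Distribute DistributionFunc
	Self       string
}

// Crawl processes a single hash: crawl it locally if we are the
// responsible peer, otherwise forward it over the network.
func (c *Crawler) Crawl(cid string) error {
	owner := c.Distribute(cid, c.Network.Peers())
	if owner != "" && owner != c.Self {
		return c.Network.Send(owner, []byte(cid))
	}
	fmt.Println("crawling locally:", cid) // download and extraction omitted
	return nil
}
```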
We looked into how we can achieve "good enough" indexing (a SPIMI sketch follows this list):
* Using *single-pass in-memory indexing (SPIMI)*.
* Using distributed indexing algorithms based on MapReduce and Hadoop.
* Using the *Bulk Synchronous Parallel (BSP)* computing model.
* Working with [The Graph](https://thegraph.com/), a decentralized index that works across blockchains.
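To give an impression of the first of these approaches, below is a minimal SPIMI sketch in Go: postings are accumulated in memory and flushed to sorted on-disk blocks, which a later pass (omitted here) would merge into the final index. Function names and the on-disk format are illustrative, not part of our codebase.

```go
package spimi

import (
	"encoding/json"
	"fmt"
	"os"
	"sort"
)

// Posting associates a term with the document it occurs in.
type Posting struct {
	Term  string
	DocID string
}

// BuildBlocks consumes a stream of postings, accumulating an in-memory
// inverted index and flushing it to an on-disk block whenever
// maxPostings is reached. Blocks are merged in a separate pass.
func BuildBlocks(postings <-chan Posting, maxPostings int) error {
	index := make(map[string][]string)
	count, block := 0, 0
	for p := range postings {
		index[p.Term] = append(index[p.Term], p.DocID)
		count++
		if count >= maxPostings {
			if err := flush(index, block); err != nil {
				return err
			}
			index = make(map[string][]string)
			count = 0
			block++
		}
	}
	if count == 0 {
		return nil
	}
	return flush(index, block)
}

// flush writes one block with terms in sorted order, so that all
// blocks can later be merged with a single linear pass.
func flush(index map[string][]string, block int) error {
	terms := make([]string, 0, len(index))
	for t := range index {
		terms = append(terms, t)
	}
	sort.Strings(terms)
	f, err := os.Create(fmt.Sprintf("block-%04d.json", block))
	if err != nil {
		return err
	}
	defer f.Close()
	enc := json.NewEncoder(f)
	for _, t := range terms {
		if err := enc.Encode(map[string][]string{t: index[t]}); err != nil {
			return err
		}
	}
	return nil
}
```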
### A possible solution to distributed search for the distributed web
We have sketched the following possible solution to distributed search (a code sketch of the boolean search follows the list):
- Provider nodes that wish to participate parse and index only the files they have added to a dweb (DHT hashes) and that have world-readable file permissions.
- This local index is put on an IPFS cluster.
- A query can use the distributed index.
- Initial search functionality is basic boolean search.
- Settings functionality anticipates later tuning.
- In the future, the search engine's functionality can be extended through extensions.
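To make "basic boolean search" concrete: the local index on a provider node could be as simple as an inverted index with AND queries over CIDs, as in this hypothetical Go sketch.

```go
package localindex

// Index is a provider node's local inverted index: it maps a term to
// the set of CIDs (of world-readable files the node provides) that
// contain it.
type Index map[string]map[string]bool

// Add records that the document with the given CID contains the terms.
func (ix Index) Add(cid string, terms []string) {
	for _, t := range terms {
		if ix[t] == nil {
			ix[t] = make(map[string]bool)
		}
		ix[t][cid] = true
	}
}

// SearchAnd returns the CIDs containing all query terms (boolean AND),
// intersecting posting lists term by term.
func (ix Index) SearchAnd(terms []string) []string {
	if len(terms) == 0 {
		return nil
	}
	var out []string
	for cid := range ix[terms[0]] {
		match := true
		for _, t := range terms[1:] {
			if !ix[t][cid] {
				match = false
				break
			}
		}
		if match {
			out = append(out, cid)
		}
	}
	return out
}
```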
### Conclusion
While fully distributed search has seemed close for a long time, nobody has been able to make the final leap. It seems to be perpetually "a few years off". This is because there are real, hard problems that still need to be solved, including the problem of trust in a decentralized architecture.
We want to progress towards this goal, but it is one we believe to be further off than it sometimes seems.
That is why we want to make ipfs-search into a working, self-sustainable project that can deliver value to users, even as we progress towards fully distributed search.
There are many smaller challenges to solve, not only the major one of making search fully distributed.
IPFS needs a working search engine now, and we are working towards this goal with the resources available to us. We have made headway towards distributed search, but to have a chance of realizing it in the short term, we would need, in addition to solving the smaller challenges, far more resources than we currently have.
## **3) 'Innards' not very visible. Is there a blog or similar?**
"Currently the innards of ipfs-search project is not very visible in a user-
friendly way. Have you published blogs on the progress and found
contributors? How many users of the code and the service do you currently have?"
#### Answer
In the previous grant period, our focus was on developing a functioning service. We have primarily let our work be visible through our GitHub changelog (available [here](https://github.com/ipfs-search/ipfs-search/commits/master)).
In 2020 we went from a prototype (alpha version) to a proof of concept (beta version). This work included large-scale refactoring and a reworking of the architecture. With the structure in such flux, we were not able to provide detailed overviews of the many changes.
Since we are now moving to a new phase we will have sufficient stability to start communicating project developments in detail.
We have already taken steps towards creating a centralized repository for publishing our technical documentation on https://readthedocs.org/.
The idea of creating a blog is a good one. We are currently in the process of making one where we will publish information about the project, as well as relevant articles and research.
We have not yet been monitoring the number of users of the service, but we are looking into how to measure usage in detail. Despite the lack of precise data, we are confident that we have seen an uptick in usage and interest since the 2020 restructuring and refactoring.
Currently, the project's main GitHub repository has over 500 stars (514 at the time of writing) and 62 forks, which we take to mean that people are playing around with the code and looking into how to use it.
## **4) Can ipfs-search become a 'reproducible index factory'?**
"When this project is complete, can ipfs-search work as a 'reproducible index factory' where recipes consisting of a corpus description (content hashes) with a indexing task (filtering, stemming, index format) can create shareable index that others can reproduce and use independently with low overhead? Created indexed could be put on ipfs too, of course."
#### Answer
We would say that it can already function as a 'reproducible index factory'. Our deployment automation is open source and deployed via Docker, and our index is published daily on IPFS as an Elasticsearch snapshot. More info is available [here](https://github.com/ipfs-search/ipfs-search/blob/master/docs/snapshots.md).
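As a hypothetical illustration (the authoritative procedure is in the linked documentation): after fetching a snapshot from IPFS to local disk, anyone can register it as an Elasticsearch filesystem repository and restore it via the standard snapshot REST API. The repository name, snapshot name, and path below are placeholders.

```go
package main

import (
	"bytes"
	"log"
	"net/http"
)

func main() {
	es := "http://localhost:9200"

	// Register a shared-filesystem snapshot repository pointing at a
	// local copy of the published snapshot (path is a placeholder).
	body := bytes.NewReader([]byte(`{"type": "fs", "settings": {"location": "/snapshots/ipfs-search"}}`))
	req, err := http.NewRequest(http.MethodPut, es+"/_snapshot/ipfs_search", body)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()

	// Restore a snapshot from the repository; "snapshot-latest" is a
	// placeholder for an actual snapshot name in the repository.
	resp, err = http.Post(es+"/_snapshot/ipfs_search/snapshot-latest/_restore", "application/json", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("restore status:", resp.Status)
}
```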
We have been actively seeking out other parties who would be willing to proactively index new content.
The project is highly specialized, with significant technical skills and knowledge required. We feared this would limit the number of potential collaborators, but we have seen significant interest from both academia and industry in adapting parts or all of the infrastructure.
We have already finalized a collaboration with an academic research team, the results of which are currently awaiting publication.
## **5) What will be the file format of the indexes**
"What file format will you use for the indexes? A format linked to a specific
software or a documented, software-independent index format like rdfhdt? Or
will you initiate writing a specification for such a format?"
#### Answer
We are looking into supporting RDF/HDT and other open index formats, but the first milestone would be to use IPLD, which natively uses CBOR, a binary JSON-like format comparable to [JSON-LD](https://en.wikipedia.org/wiki/JSON-LD).
We would like to do a proof of concept with a single [IPLD](https://github.com/ipld/ipld) dump of our raw index content; the next step would be live-updated IPLD dumps.
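As a first step towards such a dump, individual index entries could be serialized to CBOR along these lines. This sketch uses the fxamacker/cbor Go library, and the document schema shown is illustrative rather than our actual index schema; a true DAG-CBOR/IPLD encoding additionally requires canonical map ordering and restricted tag use.

```go
package main

import (
	"fmt"
	"log"

	"github.com/fxamacker/cbor/v2"
)

// IndexEntry is an illustrative (not actual) schema for one raw index
// document in an IPLD-style dump.
type IndexEntry struct {
	CID       string   `cbor:"cid"`
	Title     string   `cbor:"title"`
	MimeType  string   `cbor:"mime-type"`
	FirstSeen string   `cbor:"first-seen"`
	Links     []string `cbor:"links"`
}

func main() {
	entry := IndexEntry{
		CID:       "QmExampleHashOnly",
		Title:     "Example document",
		MimeType:  "text/html",
		FirstSeen: "2021-01-01T00:00:00Z",
		Links:     []string{"QmAnotherExampleHash"},
	}

	// Encode the entry to CBOR bytes.
	data, err := cbor.Marshal(entry)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d CBOR bytes: %x\n", len(data), data)
}
```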
We would be prepared to discuss including working towards this in the next grant period.
## **6) Can we break the project into smaller, reusable parts?**
"Considering the size of the requested grant and the apparent monolithic/
tightly integrated design, we worry that it's hard for other projects to reuse
parts of your work. What will you do to offer modules that can be reused?"
#### Answer
In the past year we have put a lot of work into unbundling the project into manageable pieces, and we think we have now reached a sane level of granularity.
In addition to the main [ipfs-search repository](https://github.com/ipfs-search/), we have made the following components available on GitHub:
* [Tika extractor](https://github.com/ipfs-search/tika-extractor) - the web service we use to extract metadata.
* [VueJS frontend](https://github.com/ipfs-search/dweb-search-frontend)
* [A multi-headed IPFS DHT sniffer for ipfs-search, based on hydra-booster](https://github.com/ipfs-search/ipfs-sniffer)
* A [microservice for searching our Elasticsearch index](https://github.com/ipfs-search/ipfs-search-api)
Our crawler is available as reusable components [here](https://pkg.go.dev/github.com/ipfs-search/ipfs-search@v0.0.0-20210208181334-1ee5c96a0a31/components).
## **7) Sharing indexes vs federation with elassandra.io**
"Federation via elassandra.io would mean allowing federated machines to run
queries on each other's hardware. That opens up a big attack surface. Wouldn't
it be better to share indexes and let federated hosts do the querying
themselves?"
#### Answer
Our index snapshots are fully auditable. Deleted and changed documents can be observed.
We publish our index as daily Elasticsearch snapshots on IPFS. More info can be found [here](https://github.com/ipfs-search/ipfs-search/blob/master/docs/snapshots.md).
With elassandra.io we would like to do a proof of concept of distributed indexing, focusing on solving global coordination.
We are still looking into how to solve the significantly harder problem of trust in a decentralized system; until it is solved, federation would mean having to fully trust our partners.