Describe the desired state of the world after this project? Why does that matter?
There is an inherent asymmetry in node capabilities imposed by the constraints of the operating environment. Attempts to overcome these limitations often lead to ad-hoc, stop-gap solutions that introduce centralization into the system. Teams building products on IPFS end up filling capability gaps with their own (unintentionally) incompatible infrastructure.
The IPFS HTTP API provides an inadequate solution for constrained environments, as it is designed to give a single client absolute control of the host node rather than to coordinate access for multiple tenants.
Here we propose a design that embraces the asymmetry among node capabilities. Instead of a traditional client-server model, a symbiotic network is formed through a protocol that enables IPFS nodes with limited capabilities and/or lifespan (from now on referred to as symbionts) to partner with IPFS nodes that possess greater capabilities and/or lifespan (from now on referred to as hosts).
Through the symbiosis protocol a host can announce services that symbionts can request in order to overcome their own limitations.
The protocol defines an m:n relationship: a symbiont can have multiple partner hosts, and a host can partner with multiple symbionts.
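The m:n relationship can be sketched as plain data. All names below are illustrative, not part of any existing IPFS API:

```typescript
// Hypothetical data model for the m:n symbiont/host relationship.
type PeerId = string;

interface Host {
  id: PeerId;
  symbionts: Set<PeerId>; // a host may serve many symbionts
}

interface Symbiont {
  id: PeerId;
  hosts: Set<PeerId>; // a symbiont may partner with many hosts
}

// Record a partnership on both sides.
function pair(host: Host, symbiont: Symbiont): void {
  host.symbionts.add(symbiont.id);
  symbiont.hosts.add(host.id);
}
```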
What must be true for this project to matter?
How would a developer or user use this new capability?
A symbiont on a phone may be paired with a host on a laptop to leverage improved capabilities when the two are on the same Wi-Fi network. At the same time, a symbiont on the phone can be paired with a remote host over the internet, allowing all user devices to sync on the go.
Multiple symbionts embedded in web apps (on different origins) can request services from the host embedded in IPFS Desktop. IPFS Desktop would surface a permission prompt; if the user grants it, the host enables the requested services for the requesting symbiont (e.g. allowing it local network access to discover and connect to other IPFS nodes on the same Wi-Fi).
A cloud computing provider exposes IPFS in their serverless platform. To do so, they spin up IPFS symbionts pre-paired with a multi-tenant, always-on host at the edge.
A desktop application embeds a node which can operate as a symbiont and/or a host. At startup it attempts to run as a symbiont that leverages an existing host (e.g. one exposed through a Unix domain socket). If no host is running, it starts one, which can be leveraged by itself and others.
Host A provides AWS Glacier as a "cold storage" service. Host B abstracts Filecoin markets & deals as a "cold storage" service. A symbiont can leverage either or both via a single interface.
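Under this model the symbiont only sees one interface; which backend a host fronts is the host's own concern. A sketch with in-memory stubs (hypothetical names, no real Glacier or Filecoin calls):

```typescript
// Hypothetical "cold storage" service interface; the backend
// (AWS Glacier, Filecoin deals, …) is hidden behind it.
interface ColdStorage {
  backend: string;
  store(cid: string, bytes: Uint8Array): void;
  retrieve(cid: string): Uint8Array | undefined;
}

// In-memory stub standing in for either host implementation.
function makeStubColdStorage(backend: string): ColdStorage {
  const vault = new Map<string, Uint8Array>();
  return {
    backend,
    store: (cid, bytes) => { vault.set(cid, bytes); },
    retrieve: (cid) => vault.get(cid),
  };
}

// A symbiont archives through whichever host(s) it is paired with,
// without knowing what sits behind the interface.
function archive(hosts: ColdStorage[], cid: string, bytes: Uint8Array): void {
  for (const host of hosts) host.store(cid, bytes);
}
```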
How directly important is the outcome to our top-level mission?
🔥🔥🔥
How much would nailing this project improve our knowledge and ability to execute future projects?
🎯🎯🎯
How sure are we that this impact would be realized? Label from this scale.
Level 3
There is enough evidence, supported by our own ad-hoc stop-gap solutions combined with the solutions our collaborators have had to resort to, to justify this level of confidence.
What specific deliverables should be completed to consider this project done?
Success means impact. How will we know we did the right thing?
Why might this project be lower impact than expected? How could this project fail to complete, or fail to be successful?
How might this project’s intent be realized in other ways (other than this project proposal)? What other potential solutions can address the same need?
There is an inherent asymmetry in node capabilities imposed by the constraints of the operating environment
Other than through WebRTC, which has a number of problems:
- Implementations in browsers (as of this writing) use significantly more resources than HTTP connections.
- Require centralized signalling.
- Require TURN servers to relay traffic when direct connections can't be established.
- Are not available in worker threads.
In practice this results in significant overhead and additional limitations, while still relaying traffic through a TURN server.
Attempts to overcome inherent platform limitations often lead to stop-gap solutions that introduce centralization into the system:
Preload nodes augment JS-IPFS, exposing API endpoints for IPFS that aren't otherwise available in web platform. In practice, that means JS-IPFS clients can add some content locally, then use a preload node to request that CID, effectively caching the data and allowing the browser tab to be closed without the data instantly becoming unavailable.
The addresses are hardcoded in js-ipfs and need to be tied to their specific PeerIDs.
(Ab)uses the IPFS HTTP API (specifically the /api/v0/dht/findprovs and /api/v0/refs endpoints) in order to leverage a more capable node in the network to perform content routing calls.
Unfortunately it is not part of the protocol, so routing node discovery happens out of band (in practice, a hardcoded address in the configuration). It also implies setting up a domain name and SSL/TLS certificates on the routing node.
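The hack looks roughly like this (a sketch; the delegate base address stands in for the hardcoded configuration value being criticized, not a recommendation):

```typescript
// Build the URL for the delegated content-routing call described
// above. The endpoint path is the go-ipfs HTTP API; the delegate
// base address is whatever was hardcoded in the configuration.
function findProvsUrl(delegateBase: string, cid: string): string {
  return `${delegateBase}/api/v0/dht/findprovs?arg=${encodeURIComponent(cid)}`;
}

// js-ipfs would then issue something like:
//   fetch(findProvsUrl(delegateBase, cid), { method: "POST" })
// which is why the delegate also needs a domain name and TLS certificate.
```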
A custom HTTP API enabling a local IPFS node to manage pins on remote node(s).
This is a great solution, enabling different services to be plugged in. Still, if nodes were able to develop symbiotic relationships, delegated pinning would just be one part of it.
Most discovery schemes supported in the web environment require a centralized signalling service.
Teams building products on IPFS resort to filling capability gaps with custom, (unintentionally) incompatible infrastructure.
The IPFS HTTP API provides an inadequate solution for constrained environments
It gives a client absolute control of the host node.
This is impractical even with localhost, as clients (apps) need to be isolated and their access needs to be revocable.
It provides an impractical path to incremental enhancements.
By default, requests from web browsers are blocked (due to CORS violations), which requires:
In the process, the host will drop all other clients. They will need to keep trying to reconnect and, on success, recreate local state.
A fallback mechanism (like ipfs-provider) will need to switch between a JS-IPFS node and an IPFS HTTP client. This results in an observable difference, making it impractical when building a competitive product experience.
Not a practical option on mobile
Similar problems manifest in desktop operating systems.
This was a pressing issue when we had IPFS Desktop, Textile Desktop Photos and Radicle.
They all ended up embedding different IPFS nodes to avoid this problem.
// TODO: Check how IPFS Desktop and Space Daemon get along with one another
The protocol draws its inspiration from DNS Service Discovery (DNS-SD), a zero-configuration networking technique.
It is a wire protocol agnostic of transport and representation (message encoding), which makes it a good fit for cross-thread communication as well as communication across network interfaces.
At a high level, the protocol allows IPFS nodes to announce services they can provide to other nodes on the network; connected nodes can request those services to overcome their limitations.
The protocol defines:
An interface enabling a host node to announce services it can provide / lease.
How an announcement gets delivered is not specified; it can be over local mDNS, over pubsub, out of band, etc.
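One possible shape for such an announcement, mirroring DNS-SD's split between a service name and opaque TXT metadata (an illustrative sketch, not a wire format):

```typescript
// Illustrative announcement shape; delivery (mDNS, pubsub, out of
// band, …) is intentionally out of scope, as noted above.
interface ServiceRecord {
  name: string;                  // e.g. "cold-storage" (hypothetical)
  version: string;
  meta: Record<string, string>;  // opaque, service-specific, TXT-like
}

interface ServiceAnnouncement {
  host: string;                  // announcing host's peer id
  services: ServiceRecord[];
}

// Check whether an announcement offers a given service.
function announces(a: ServiceAnnouncement, service: string): boolean {
  return a.services.some((s) => s.name === service);
}
```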
An interface enabling a connected symbiont to request access to a set of capabilities. The request encodes:
The protocol assumes:
Public key cryptography is used to enforce access levels (typically the public key would correspond to a peer ID, but the protocol does not mandate this, to enable various use cases).
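A minimal sketch of that enforcement using Ed25519 from Node's crypto module (one possible choice; the protocol itself would not mandate a key type or encoding, and the request body and capability name here are illustrative):

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The symbiont signs its capability request; the host verifies it
// against the public key it granted access to.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const request = Buffer.from(
  JSON.stringify({ capabilities: ["cold-storage"] })
);

// Symbiont side: sign the request.
const signature = sign(null, request, privateKey);

// Host side: accept only requests signed by a granted key.
const granted = verify(null, request, publicKey, signature);
```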
The revocation reason can be arbitrary. E.g.
- Payment for service was not received.
- In IPFS Desktop, the user revokes permission for a specific web page.
The protocol does not prescribe a specific set of services, with the assumption that they can be developed as needed. That said, below are a couple of examples: