IPFS nodes

go-ipfs node

  • runs on servers and user machines with the full set of capabilities
    • tcp and quic transports enabled by default
    • /ws/ transport disabled by default
    • http gateway with subdomain support for origin isolation between content roots
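
The subdomain-gateway point is easiest to see with a request against a local go-ipfs daemon. The TypeScript sketch below assumes go-ipfs is running with its gateway on the default localhost:8080 and uses the well-known "welcome" directory CID as an example; the gateway answers path-style requests with a redirect to a per-CID subdomain, so each content root gets its own browser origin.

    // Example CID: the docs/"welcome" directory added by `ipfs init`
    const cid = 'QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc'

    async function showSubdomainGateway (): Promise<void> {
      // Path-style URL: every CID shares the gateway's single origin.
      // Use `redirect: 'manual'` so we can inspect the redirect itself.
      const res = await fetch(`http://localhost:8080/ipfs/${cid}/`, { redirect: 'manual' })
      // go-ipfs redirects to http://<cidv1>.ipfs.localhost:8080/,
      // i.e. a distinct origin per content root
      console.log(res.status, '->', res.headers.get('location'))
    }

    showSubdomainGateway().catch(console.error)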

js-ipfs node

  • runs in the browser with a limited set of capabilities
    • can connect to server nodes (go/js-ipfs) only via secure websockets (/wss/ requires manual setup of TLS at the server)
      • NOTE: without WSS it won't connect to the "mainnet" DHT, only to other js-ipfs nodes
    • can connect to other browser nodes via WebRTC (with the help of a centralized ws-webrtc-star signaling service); see the browser sketch after this list
    • no http gateway (a browser can't open a TCP port)
  • runs on servers and user machines with (in theory) the full set of capabilities
    • DHT not on par with go-ipfs (is this still the case?)
    • http gateway present, but has no subdomain support
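
A rough TypeScript sketch of the browser case above, using js-ipfs (ipfs-core). The webrtc-star signaling address and the server's /wss/ multiaddr are placeholders for illustration, and depending on the js-ipfs version the webrtc-star transport may need to be wired into the libp2p options explicitly rather than inferred from the swarm address.

    import { create } from 'ipfs-core'

    async function startBrowserNode (): Promise<void> {
      const node = await create({
        config: {
          Addresses: {
            // Example webrtc-star signaling server; lets this browser node
            // discover and dial other browser nodes over WebRTC
            Swarm: ['/dns4/wrtc-star1.par.dwebops.pub/tcp/443/wss/p2p-webrtc-star']
          }
        }
      })

      // Browsers can only reach server nodes over secure websockets, so the
      // server must expose a /wss/ address (TLS configured manually).
      // Placeholder address - replace with a real one before dialing.
      const serverAddr = '/dns4/ipfs.example.com/tcp/443/wss/p2p/<server-peer-id>'
      await node.swarm.connect(serverAddr)

      const peers = await node.swarm.peers()
      console.log('connected peers:', peers.length)
    }

    startBrowserNode().catch(console.error)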

Preload node

  • are go-ipfs nodes with their API ports exposed, some HTTP API commands accessible, and a patch applied
  • used by js-ipfs nodes, both in and outside the browser
  • js-ipfs nodes remain connected to the libp2p swarm ports of all preload nodes by having preload nodes on the bootstrap list
  • when a user wants to make some UnixFS DAG publicly available, js-ipfs calls ipfs refs -r <CID> on the HTTP API of a randomly chosen preload node; this puts the CID in that preload node's wantlist, which causes it to fetch the data from the user (see the sketch after this list)
  • Other js-ipfs nodes requesting the content can then resolve it from the preload node via bitswap as the data is now present in the preload node's blockstore
  • Only works with dag-pb CIDs because that's all the refs command understands
    • Q: What are the net effects of this? Bad perf or broken js-ipfs for non-dag-pb CIDs? Are there mitigations?
    • A: Harder to find non-dag-pb content - e.g. you need a connection to the publishing js-ipfs instance, or it needs to be put on the DHT by a delegate node. We could do this at the block level and use block stat in the same way as the js-delegate-content module does
  • Preload nodes garbage collect every hour so preloaded content only survives for that long
    • Q: Is this configurable?
    • A: Yes? Infra would be able to tell you more
  • TODO: Is there anything about pubsub topic bootstrapping here?
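
What the "preload" call boils down to is a single HTTP request against a preload node's exposed API port, which js-ipfs fires off for newly added content. A minimal TypeScript sketch under those assumptions; the hostname matches the public preload nodes, but the exact endpoint and parameters should be treated as illustrative:

    // Ask a preload node to walk (and therefore fetch) the DAG rooted at `cid`.
    // `cid` is a placeholder for a dag-pb root our local js-ipfs node provides.
    async function preload (cid: string): Promise<void> {
      const url = `https://node0.preload.ipfs.io/api/v0/refs?r=true&arg=${cid}`
      const res = await fetch(url, { method: 'POST' }) // go-ipfs API expects POST
      if (!res.ok) {
        throw new Error(`preload failed: ${res.status} ${res.statusText}`)
      }
      // The body streams one JSON object per ref; the useful side effect is
      // that the preload node now has the blocks in its blockstore.
      console.log(await res.text())
    }

    preload('<cid-of-content-to-preload>').catch(console.error)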

Relay node

  • are go-ipfs nodes
    • Q: or are they custom go-libp2p nodes?
  • can also be properly configured js-libp2p nodes, or the out-of-the-box js relay
  • are used by go-ipfs nodes to serve as relays/VPNs for nodes that deem themselves unreachable from the public internet
    • Q: Used by js-ipfs too?
    • A: Yes. They can also be used to overcome a lack of transport compatibility. For instance, a browser node with WebSockets/WebRTC transports can talk with a go-ipfs node that only talks TCP via a relay that supports both transports. This is not enabled by default and needs to be set up (see the sketch after this list).
  • not configurable in go-ipfs; it uses a preset list of relays
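
To make the transport-bridging answer above concrete, here is a hedged TypeScript sketch of a js-ipfs node dialing a TCP-only peer through a relay using a circuit multiaddr. The addresses and peer IDs are placeholders, and both the relay and the target must already have relaying enabled for this to work.

    import { create } from 'ipfs-core'

    async function dialViaRelay (): Promise<void> {
      const node = await create()

      // Circuit relay address: dial the relay over a transport we share (wss),
      // then have it open a relayed connection to the target peer.
      const relayedAddr =
        '/dns4/relay.example.com/tcp/443/wss/p2p/<relay-peer-id>' +
        '/p2p-circuit/p2p/<target-peer-id>'

      await node.swarm.connect(relayedAddr)
      console.log('dialed the TCP-only peer via the relay')
    }

    dialViaRelay().catch(console.error)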

Bootstrap node

  • are go-ipfs nodes
  • used by go-ipfs and js-ipfs nodes to enter the DHT
  • if they go offline, a go-ipfs node that restarts will not, by default, be able to join the public DHT
    • Q: SO MANY QUESTIONS to start, do you mean if all configured bootstrap nodes go offline this happens?
  • configurable in the go-ipfs and js-ipfs config files (see the sketch below)
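
As a sketch of that configurability on the js-ipfs side (TypeScript), the bootstrap list can be overridden at creation time and inspected at runtime; the multiaddr shown follows the format of the public bootstrappers but uses a placeholder peer ID.

    import { create } from 'ipfs-core'

    async function inspectBootstrap (): Promise<void> {
      const node = await create({
        config: {
          // Override the default bootstrap list (placeholder peer ID)
          Bootstrap: ['/dnsaddr/bootstrap.libp2p.io/p2p/<bootstrapper-peer-id>']
        }
      })

      // The list can also be read (and edited) while the node is running
      const { Peers } = await node.bootstrap.list()
      console.log('bootstrap peers:', Peers.map(p => p.toString()))
    }

    inspectBootstrap().catch(console.error)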

Delegate routing node

  • are go-ipfs nodes with their API ports exposed and some HTTP API commands accessible
  • used by js-ipfs nodes to query the DHT and to publish content without having to run DHT logic themselves (see the sketch after this list)
  • publishing works with arbitrary CID codecs because the js-delegate-content module publishes CIDs at the block level rather than at the IPLD/DAG level
  • Delegate nodes garbage collect every hour, so provided content only survives for that long - unless the uploading js-ipfs node is still running, in which case it issues periodic re-provides via the same publishing mechanism, which extends the life of the content on the DHT
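
A short TypeScript sketch of pointing a js-ipfs node at a delegate routing node; the delegate multiaddr matches the public defaults described above, but the config key and its exact shape may differ between js-ipfs versions.

    import { create } from 'ipfs-core'

    async function startWithDelegate (): Promise<void> {
      const node = await create({
        config: {
          Addresses: {
            // HTTP API of a go-ipfs delegate node: js-ipfs sends content
            // routing requests (find providers, provide) here instead of
            // running DHT logic itself
            Delegates: ['/dns4/node0.delegate.ipfs.io/tcp/443/https']
          }
        }
      })

      const { id } = await node.id()
      console.log('node with delegated routing started:', id.toString())
    }

    startWithDelegate().catch(console.error)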

Addenda

  1. Preload and delegate routing nodes are the same servers (go-ipfs nodes), though they are addressed independently and so do not need to be - we have the choice to make them stand-alone processes in the future if we wish.
    • Q: "addressed independently" - what does this mean? a different place in config? or where the network communication happens in the stack/codepath?
    • A: different multiaddrs that resolve to the same physical (virtual?) machine - e.g. preload config, delegate config
  2. Preload, delegate and bootstrap nodes are all listed in the js-ipfs configuration as bootstrap nodes, so js-ipfs will maintain libp2p swarm connections to them at all times (see the sketch below).
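
Putting the addenda together, a hedged TypeScript sketch of a js-ipfs configuration in which the same public infrastructure appears in three roles - preload target, routing delegate, and bootstrap peer (so a swarm connection to it is kept open). The addresses follow the public defaults but are shown here as examples, with the swarm peer ID left as a placeholder.

    import { create } from 'ipfs-core'

    async function startNode (): Promise<void> {
      // One machine can be reached on several multiaddrs: its HTTPS API as a
      // preload/delegate endpoint and its libp2p swarm port as a bootstrapper.
      const node = await create({
        preload: {
          enabled: true,
          addresses: ['/dns4/node0.preload.ipfs.io/https']
        },
        config: {
          Addresses: {
            Delegates: ['/dns4/node0.delegate.ipfs.io/tcp/443/https']
          },
          Bootstrap: [
            '/dns4/node0.preload.ipfs.io/tcp/443/wss/p2p/<preload-peer-id>'
          ]
        }
      })

      console.log('node online:', node.isOnline())
    }

    startNode().catch(console.error)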