# W3NAME
# Shortcuts
* [Zenboard](https://app.zenhub.com/workspaces/web3storage-product-6106d25b58fccc00105aedcb/board?labels=pi%2Fw3name)
* [System diagram](https://excalidraw.com/#json=7-CV06quxTC10qE_P5HjG,O6fglbNrqs7oKewmEc8qYg)
* [Cloudflare w3name-staging worker](https://dash.cloudflare.com/fffa4b4363a7e5250af8357087263b3a/workers/services/view/w3name-staging/production)
# Table of Contents
[TOC]
# Demo
1. Bring up the Excalidraw export and do a 1-2 minute talk-over explaining the architecture: https://excalidraw.com/#room=4abae94ede80839b2aa2,IlZnvxzR_cB1HpdvsUN46w
   (backup link: https://excalidraw.com/#json=emHQBaKhiF5ZKR421PQZa,gJzzvrF1fRPJg5Dnw2M7UQ)
2. Check out these scripts: https://gist.github.com/francois-potato/415283c3b41072cd4bb3a2df9fd8c324
3. Fetch two file CIDs (preferably images).
4. Run the first script to create a record, noting that the private key is written to the filesystem.
5. Check an IPFS gateway to verify the record; the script will print gateway links (Argon is the fastest).
6. Run the second script to update the previously created record, incrementing the seqno.
7. Check a gateway to show that the record has been updated.
# Background
The aim is to take the IPNS (mutability) API and service out of Web3.Storage and provide the functionality for other projects (e.g. NFT.Storage, Uploads v2).
# Approach
## MVP
We will start by extracting w3name from Web3.Storage while using the existing database. This will enable us to have w3name running separately in a short amount of time.
[Current and future system diagram](https://excalidraw.com/#json=FujaFeqqswYZ8_7D4jjq7,xyoNCLYRqkl3slISvxTIGQ)
### Extract w3name code to a new repository ([ticket](https://github.com/web3-storage/w3name/issues/1)) (estimate: 5 days)
- Set up CI
- Remove authentication
### Run the new repository as its own CF worker ([ticket](https://github.com/web3-storage/w3name/issues/2)) (estimate: 3 days)
- Set up release-please
- Set up domain name and DNS
- Set up staging?
### Implement a CF Durable Object (DO) for storing IPNS records ([ticket](https://github.com/web3-storage/w3name/issues/3)) (estimate: 5 days)
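A minimal sketch of the core rule such a Durable Object would enforce. This is not the real implementation: the Workers `state.storage` API is asynchronous and records are signed IPNS protobufs; storage is a plain Map here so the seqno invariant is easy to see.

```javascript
// Sketch only: one DO instance per IPNS key. An update is accepted only if
// its seqno is strictly greater than the stored one, so stale or replayed
// records can never overwrite a newer revision.
class IpnsRecordObject {
  constructor(storage = new Map()) {
    this.storage = storage; // stand-in for state.storage (async in Workers)
  }

  // The POST /name/:key handler would call this after verifying the signature.
  put(key, record) {
    const existing = this.storage.get(key);
    if (existing && record.seqno <= existing.seqno) {
      throw new Error('stale record: seqno must increase');
    }
    this.storage.set(key, record);
    return record;
  }

  // The GET /name/:key handler would call this.
  get(key) {
    return this.storage.get(key);
  }
}
```

In the real worker each IPNS key maps to its own DO instance, so `key` would be implicit in the object's identity; it is kept explicit here for the sketch.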
### Configure a new JS client module and publish it on npm ([ticket](https://github.com/web3-storage/w3name/issues/4)) (estimate: 3 days)
### Provide w3name functionality in the Web3.Storage JS module ([ticket](https://github.com/web3-storage/web3.storage/issues/1424)) (estimate: 3 days)
### Point ipns-publisher to the w3name endpoint ([ticket](https://github.com/web3-storage/ipns-publisher/issues/2)) (estimate: 3 days)
### Implement 12h complete rebroadcast of IPNS records to the DHT ([ticket](https://github.com/web3-storage/w3name/issues/5)) (estimate: 8 days)
- When creating or updating a record via POST /name/:key, create/update the corresponding DO and add an entry in Cloudflare KV
- Implement 24-hour alarms on the Durable Objects
- When an alarm fires, POST a broadcast request to `ipns-publisher`
- Add a `/broadcast` endpoint to `ipns-publisher` that calls dht.put when it receives a request
Currently there is no way to list all DO instances from the runtime API or the Cloudflare API: list() lists the keys inside a DO instance, not the DO instances themselves. This makes it impossible to query the data externally. One solution is to use DO alarms, making each IPNS DO instance responsible for requesting its own republish.
Because there is no way to list all DO instances, we should keep a record of the keys elsewhere. We can use Cloudflare KV alongside the DOs: KV gives us the ability, at a later time, to list all the IPNS record keys in the system.
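The alarm loop described above can be sketched as follows. The Workers pieces (`state.storage.setAlarm`, `fetch` to ipns-publisher) are replaced by injected stand-ins so the flow is testable; the `/broadcast` path is taken from the plan above, and since the notes mention both a 12h target and 24-hour alarms, the interval is a single tunable constant here.

```javascript
// Sketch of the alarm-driven rebroadcast loop for one IPNS record's DO.
const REBROADCAST_INTERVAL_MS = 24 * 60 * 60 * 1000; // tunable (12h vs 24h TBD)

class RebroadcastAlarm {
  constructor({ setAlarm, postBroadcast, now = () => Date.now() }) {
    this.setAlarm = setAlarm;           // stand-in for state.storage.setAlarm
    this.postBroadcast = postBroadcast; // stand-in for fetch to ipns-publisher
    this.now = now;
  }

  // Called whenever the record is created or updated: (re)arm the alarm.
  onPut(record) {
    this.record = record;
    this.setAlarm(this.now() + REBROADCAST_INTERVAL_MS);
  }

  // Called by the runtime when the alarm fires: rebroadcast, then re-arm,
  // so every record keeps republishing itself without any external listing.
  async alarm() {
    await this.postBroadcast('/broadcast', this.record);
    this.setAlarm(this.now() + REBROADCAST_INTERVAL_MS);
  }
}
```

Re-arming inside `alarm()` is what makes the republish self-sustaining per instance, which is the workaround for not being able to enumerate DO instances.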
### Run ipns-publisher on AWS with the help of the NearForm SRE team ([ticket](https://github.com/web3-storage/ipns-publisher/issues/1)) (estimate: 1 day)
### Migrate IPNS records from the existing database to Durable Objects ([ticket](https://github.com/web3-storage/w3name/issues/6)) (estimate: 2 days)
### Create an auto-generated docs page for the w3name API ([ticket to be created](https://github.com/web3-storage/w3name/issues/2)) (estimate: 1 day)
### Create a landing page and documentation website (out of scope)
- Build the landing page & documentation website as a Cloudflare page (based on OpenAPI docs, taking inspiration from the [pinning API spec](https://ipfs.github.io/pinning-services-api-spec/))
- Update [existing documentation](https://github.com/web3-storage/web3.storage/tree/main/packages/client#mutability)
## Future improvements
After the MVP is implemented, we propose to make the following improvements.
### Implement rate-limiting based on IP
- Implement in code or with Cloudflare
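If done in code rather than via Cloudflare's built-in limits, a fixed-window, per-IP limiter is a plausible starting point. A sketch (limit and window size are illustrative, not decided):

```javascript
// Sketch of a fixed-window, per-IP rate limiter that could run inside the
// worker. State is in-memory, so in production this would need DO/KV backing
// or Cloudflare's own rate limiting instead.
class IpRateLimiter {
  constructor({ limit = 30, windowMs = 60_000, now = () => Date.now() } = {}) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.now = now;
    this.hits = new Map(); // ip -> { windowStart, count }
  }

  // Returns true if a request from `ip` is allowed in the current window.
  allow(ip) {
    const t = this.now();
    const entry = this.hits.get(ip);
    if (!entry || t - entry.windowStart >= this.windowMs) {
      this.hits.set(ip, { windowStart: t, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

In a worker, `ip` would typically come from the `CF-Connecting-IP` request header.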
### Enable IPNS updates over pubsub
* [Update configuration](https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#ipnsusepubsub)
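Per the go-ipfs config docs linked above, this is a single flag in the node's configuration (the relevant fragment of the go-ipfs JSON config):

```json
{
  "Ipns": {
    "UsePubsub": true
  }
}
```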
### Refactor ipns-publisher to handle high utilization
* If we see a steep increase in the utilization of w3name, we will propose a new architecture involving sharding or using a queue/worker setup
# Rolling notes
## 09/06/22
* We had a meeting with Mikeal this Tuesday
* Created tickets
* See the pi/w3name and topic/pot tags on ZenHub
* Got access to Cloudflare
* Started working on tasks
# New questions
* What is the rationale behind storing the data in durable objects?
* Why is the W3name service considered a read service?
> [Mikeal]: for the Future section, i wouldn’t consolidate into AWS, we are consolidating read services into Cloudflare
# Questions covered with Alan on Tuesday 31 May
* We need to spin this up now, as Uploads v2 will need it. Why is this? How will Uploads v2 use this?
    * Ask Hugo
* Where will the API live? Options:
    * Where it is now: in a CF worker in web3.storage
    * Move it to an AWS lambda
    * Move it to a new CF worker, completely separate from the current web3.storage implementation
    * > We are familiar with CF workers; better for maintenance; use a separate worker.
    * > Alan is in favour of moving the DB from Heroku, so long as it still gives us an HTTPS interface with the same transactional guarantees. The DB contains a stored procedure which runs transactionally, so an AWS DB would need to offer the same consistency ([stored procedure](https://github.com/web3-storage/web3.storage/blob/main/packages/db/postgres/functions.sql#L430-L447)).
* Where will the new database live?
    * AWS RDS?
    * An HTTP interface on AWS? Trade-off: it could be easier to use from a lambda.
* IP-based rate limit: is there anything pre-existing we can use?
* Should we explore any changes on the ipns-publisher?
* It could run as a CRON job talking directly to the new database
    * > Needs a long-running IPFS node, which maintains a list of peers. Uses the go-ipfs implementation.
* > 1. Listen to the IPFS network and ingest published changes into the cache in our DB. Possibly not a strong enough need for this yet, but we will probably want it at some point.
* > 2. P2P pub/sub for publishing changes to records (like the current websocket thing) which broadcasts changes to records so that people can listen to changes without having to get updates from the DHT. This might just be a case of configuring the IPFS node to use pub/sub.
* > [goipfs pubsub doc](https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#ipnsusepubsub)
* Where should the website/documentation live?
* Cloudflare pages?
* Inspiration https://ipfs.github.io/pinning-services-api-spec/
* Should be deployed separately (static, CDN), not bound to the deployment of the API endpoint.
> Watch out for the client library: reduce the size of the crypto library imports ([see](https://github.com/web3-storage/web3.storage/tree/main/packages/api/src/utils/crypto)). We could potentially pull this out as a separate package and then install it into the client library.
# Questions covered with David on Tuesday
* Is this still the case?
* > Name records are not /yet/ published to or updated from the IPFS network. Working with name records simply updates the Web3.Storage cache of data.
* There is https://github.com/web3-storage/web3.storage/pull/932 but this is for users of the service who want to listen to our service for a particular record (not the service itself listening to the DHT for updates done elsewhere).
* Is this the part that Digital Ocean does?
    * > It broadcasts records from the database to the network.
* Where is the code for the part in Digital Ocean? Is this the app that publishes to the DHT every 24 hours? Is it a CRON job?
* > https://github.com/web3-storage/ipns-publisher
* > It publishes changes to the key.
* Do we need to set up a new database? Who will do that? Do we need to architect it?
* > Standalone database in AWS. We need to work with the SRE team. We should put together a suggested architecture. They can also help us out deploying the code in Digital Ocean into AWS.
* > This is a general service, so it can be stored in a new db in AWS. Also for w3s all data around uploads and pins will be migrated to AWS when Uploads v2 goes live.
* > Long term there is a plan to move away from Heroku.
* Do we turn off the service in web3.storage? And when, if yes? How much is it used?
* > Yes we turn it off, at the same time as the new service goes live. Not used super heavily for the time being (326 record updates in the database).
* Should the functions read out a 'deprecated' note and refer you to the new service?
* > We will have an 'informational grace' period.
* Static HTML build for the website? Or build in React so it can become an application over time?
* > For the time being it can be a static build.
* Any requirements on the docs build from an architecture point of view?
    * Create auto-generated OpenAPI docs for the API.
    * Add a docs page inside web3.storage.
# Links
* [Notion initiative](https://www.notion.so/w3name-service-8c3dcfffd551415eae17a34462582d35)
* [IPNS publisher](https://github.com/web3-storage/ipns-publisher)
* [Initial w3name implementation](https://github.com/alanshaw/w3name)
* [Existing w3s mutability client](https://github.com/web3-storage/web3.storage/tree/main/packages/client#mutability)