Oli Evans

@olizilla

Joined on Mar 28, 2018

  • When a user wants their stuff to be available over ipni+bitswap (aka ipfs), their client creates a new key pair for that space. The user (not us!) can then create and sign the IPNI advert saying "these blocks are available from hoverboard + <this new random peerid>" and sends us the adverts to send on to cid.contact. We store the ads in their space, 'natch (billable, noice). We store the mapping of space to peer ID key (new but trivial). So now results for IPNI queries for a user's blocks will come back with a multiaddr for hoverboard + the per-space peer ID. On a bitswap/libp2p read, the public peer ID is sent as the request path, and hoverboard will go look up the secret key to use for that request. It finds it and away we go... We also just got back some data locality, as we know we're only gonna be looking at CARs for the space associated with the peer ID. AND WE CAN NOW ASSIGN ALL BITSWAP TRAFFIC COSTS TO A SPACE if we want to, or whatever. I'm not excited, you are.
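    A minimal sketch of that client-side step, assuming Node's built-in crypto for the per-space Ed25519 key pair; the advert fields and hoverboard address below are illustrative placeholders, not the real IPNI advertisement schema:

    ```ts
    import { generateKeyPairSync, sign } from 'node:crypto'

    // Sketch only: one key pair per space, advert signed by the user with the
    // per-space secret key. Field names and the multiaddr are placeholders.
    function createSpaceAdvert (spaceDID: string, entriesCID: string) {
      const { publicKey, privateKey } = generateKeyPairSync('ed25519')

      const advert = {
        Provider: spaceDID,
        Addresses: ['/dns4/hoverboard.example/tcp/443/wss'], // hoverboard + per-space peer id
        Entries: entriesCID, // CID of the multihash entries being advertised
        PublicKey: publicKey.export({ type: 'spki', format: 'der' }).toString('base64')
      }

      // Ed25519 signs the message directly, so the algorithm argument is null
      const signature = sign(null, Buffer.from(JSON.stringify(advert)), privateKey)
      return { advert, signature: signature.toString('base64'), privateKey }
    }
    ```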
  • We store CARs in buckets. Reads over http or bitswap have to read individual blocks from the CAR to build up the response. To make a range request for just the bytes of a block we need to know the offset of the block bytes from the start of the CAR and its byte length. The CAR v2 Multihash Sorted Index is not designed to support this use case and does not give us that info. In the terminology of CARs, a Block is a (CID, bytes) pair. When we handle a read request, we only really care about fetching the block bytes. Each Block is also prefixed with a varint that specifies the length of the CID plus block bytes that follow it.
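    For illustration, the shape of per-block index record this implies, plus the range header a read handler would derive from it (names are hypothetical, not an existing index format):

    ```ts
    // Hypothetical index record: enough info to range-read one block out of a CAR in a bucket.
    interface BlockLocation {
      multihash: string  // block multihash, the lookup key
      carCid: string     // which CAR (bucket key) holds the block
      offset: number     // byte offset of the block data from the start of the CAR
      length: number     // byte length of the block data
    }

    // Turn a located block into an HTTP Range header for a bucket GET.
    // HTTP ranges are inclusive on both ends, hence the -1.
    function rangeHeaderFor (loc: BlockLocation): string {
      return `bytes=${loc.offset}-${loc.offset + loc.length - 1}`
    }
    ```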
  • (e-ipfs mk2). Run miniswap in a Cloudflare Worker, handling bitswap requests via websockets. Like E-IPFS without having to manage EKS infra. Motivation: lower bandwidth egress costs from CF. bitswap-peer is crash looping and it's getting worse. The elastic in the E-IPFS has lost its snap. There is too much infra in e-ipfs for the team to support. Hosting it on Cloudflare instead of AWS + EKS would mean 🎉 easier memory management, with a worker per peer/connection
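    A rough sketch of the Worker side of that, using the Cloudflare Workers WebSocket upgrade pattern; handleBitswapMessage is a stand-in for miniswap's actual message handling:

    ```ts
    // One Worker invocation per incoming connection: accept the WebSocket upgrade
    // and feed each message to a (placeholder) bitswap handler.
    declare function handleBitswapMessage (ws: WebSocket, data: ArrayBuffer | string): Promise<void>

    export default {
      async fetch (request: Request): Promise<Response> {
        if (request.headers.get('Upgrade') !== 'websocket') {
          return new Response('expected websocket', { status: 426 })
        }

        const { 0: client, 1: server } = new WebSocketPair()
        server.accept()
        server.addEventListener('message', (event) => {
          handleBitswapMessage(server, event.data).catch(err => server.close(1011, String(err)))
        })

        // hand the client end back to the runtime to complete the upgrade
        return new Response(null, { status: 101, webSocket: client })
      }
    }
    ```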
  • pickup mk2 - local-first pinning service :::info this doc describes two ideas! beam app: the pinning service is you! (easy win) beam as a service: running pickup in a durable object (experts only) ::: beam app
  • Create signed urls to have users PUT CARs to a CF worker (again). It verifies every block from every CAR and writes the complete indexes to R2, DUDEWHERE style. We now have a "hash-on-write" guarantee, and combined with the CAR CIDs we can shuffle the data around inside the system safe in the knowledge that we don't have to recheck the individual blocks. We also have a complete index of all blocks in R2, so w3s.link can serve all requests from R2. We can stop the slow and expensive w3s.link -> ipfs.io -> E-IPFS reads pipe (see w3s.link redirect). We can give hoverboard 🛹 full indexes, so it has full info to serve any bitswap request from R2 only. However: we moved away from PUTs via a worker so that we could remove limitations. We would be back to
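    A minimal sketch of the hash-on-write check, assuming @ipld/car and multiformats, and sha2-256 blocks for brevity (a real verifier would pick the hasher from cid.multihash.code):

    ```ts
    import { CarBlockIterator } from '@ipld/car'
    import { sha256 } from 'multiformats/hashes/sha2'
    import { equals } from 'uint8arrays/equals'

    // Walk every block in an uploaded CAR and check the bytes really hash to the CID.
    async function verifyCar (carBytes: AsyncIterable<Uint8Array>): Promise<void> {
      const blocks = await CarBlockIterator.fromIterable(carBytes)
      for await (const { cid, bytes } of blocks) {
        const digest = await sha256.digest(bytes)
        if (!equals(digest.bytes, cid.multihash.bytes)) {
          throw new Error(`block ${cid} does not match its bytes`)
        }
      }
    }
    ```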
  • Let's redirect to where the data is, instead of proxying. TL;DR: Redirect requests for stuff we don't have to dweb.link. Stop proxying. Set up a freeway-like gateway on AWS and redirect requests to it for stuff we have but only have the index for in AWS... Or a satnav-api that lets freeway ask the e-ipfs db where a non-root block is. ...and this doc does not attempt to solve for future issues like retrieving things from Filecoin SPs, only moving around our current features. There are 3 scenarios we have to handle when processing a request for GET /ipfs/:cid
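    A sketch of that routing, with the two lookups left as hypothetical stubs standing in for the DUDEWHERE indexes in R2 and the e-ipfs db / satnav-api:

    ```ts
    // Hypothetical lookups: do we hold the blocks in R2, or only know about them on the aws side?
    declare function haveBlocksInR2 (cid: string): Promise<boolean>
    declare function haveIndexOnlyInAws (cid: string): Promise<boolean>
    declare function serveFromR2 (cid: string, url: URL): Promise<Response>

    // Redirect to where the data is instead of proxying.
    async function route (cid: string, url: URL): Promise<Response> {
      if (await haveBlocksInR2(cid)) {
        // scenario 1: we have the blocks and the index, serve directly
        return serveFromR2(cid, url)
      }
      if (await haveIndexOnlyInAws(cid)) {
        // scenario 2: we have the data but only the aws side knows where the blocks are,
        // so bounce to a freeway-like gateway running next to that index
        return Response.redirect(new URL(url.pathname + url.search, 'https://freeway.aws.example').toString(), 302)
      }
      // scenario 3: not ours, stop proxying and redirect to dweb.link
      return Response.redirect(new URL(url.pathname + url.search, 'https://dweb.link').toString(), 302)
    }
    ```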
  • 5 weeks, ending Mar 10 🔴 P0 🟡 P1 🟢 P2 🔴 w3access + w3provider + w3session [ ] Finish implementing w3provider (MVP free tier provider) [ ] w3protocol#347 Email template improvements - @travis, @alanshaw [ ] w3protocol#348 GET -> POST on email validation links - @travis, @alanshaw [ ] w3protocol#341 Postmark daghouse domain move to web3.storage - @travis, @alanshaw [ ] Finish implementing w3access
  • hugo specs! write up of capability negotiation protocol. (demo'd on Friday) spec lint. to get rid of boring errors. start implementing! vasco data-stack working for filecoin pipeline with unit and integration tests... (demo'd on Friday) NEEDS REVIEW! aggregation goes from ingest -> ready Need to sync with Riba next: trigger filecoin pipeline on CAR written to R2
  • Oli ⚡️ unblocked and landed @travis' great customisable components work ⚡️ report and fix CID.parse not throwing on v0 string with explicit multibase prefix ⏭ making w3console space selector Vasco ⚡️ tracking down a json bug for a dweb.link node that was reported to us via w3s.link, raised with netops ⏭ w3filecoin: working on data stack, wrestling cdk config. dynamo db stream consumer config... no docs, so having to trial-and-error the config by deploying it. We are supposed to provide the commp for each CAR that goes into an aggregate. 🗣 sync'd on dag-as-service (receipts) with gozala & hugo
  • The web3.storage ipfs cluster contains ~320TiB of: all DAGs pinned via the pinning service, and all DAGs for direct uploads from day 1 until 2022-09-21 (when we switched to writing them to R2). Some care must be taken to get the data off cluster, as it is the only thing storing: DAGs pinned via the pinning service, and direct uploads created before 2021-09-27 (after which we started writing uploads to S3 as well as cluster).
  • Getting new uploads into Filecoin deals using ♠️ Motivation: Uploads made to the new api are not being stored in Filecoin. The dagcargo implementation that we use for the old api was intended to be a short-term fix. It's on life-support maintenance only. We could update it to source CARs from the new w3up S3 bucket, but the preference is to move on to the new way... ...and dagcargo aggregation was block-based and expensive when we receive huge uploads with many blocks. We have an opportunity to simplify it by creating aggregates for deals out of the existing user-uploaded CARs. Implementation details
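    As a sketch of that simplification: a greedy packer that fills aggregates with whole user-uploaded CARs and never re-blocks. The 16 GiB target is illustrative; real deals pad pieces to a power-of-two size.

    ```ts
    interface Car {
      cid: string   // CAR CID
      size: number  // bytes
    }

    // Greedy sketch: fill each aggregate with whole CARs until the target size is reached.
    function aggregate (cars: Car[], targetSize = 16 * 1024 ** 3): Car[][] {
      const aggregates: Car[][] = []
      let current: Car[] = []
      let used = 0

      for (const car of cars) {
        if (used + car.size > targetSize && current.length > 0) {
          aggregates.push(current)
          current = []
          used = 0
        }
        current.push(car)
        used += car.size
      }
      if (current.length > 0) aggregates.push(current)
      return aggregates
    }
    ```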
  • nftstorage.link 🎟🔗 everywhere 🌐 all the time 💯 Problem ❌ Let's limit the blast radius of the red screen'o'death Solution 🧪