# dotstorage reads pipeline - malware mitigation

The dotstorage reads pipeline is one of the entry points for users to request content stored in the IPFS network. Some of this content will be known within our system, given that we are also a storage provider and offer great performance for writes and reads when the full stack is used. However, users of our reads pipeline can still use it to get content from any node in the IPFS network.

When content is stored with our writes pipeline, we can immediately trigger malware detection tools to evaluate it. Consequently, we can be moderately confident that content stored there is safe (TBC: this really depends on how we set up writes and reads, whether we want to block reads before inspection, and so on).

Once content that was not previously stored in our writes pipeline is requested, we will end up going to the public network (once we have dagula in a previous resolution layer to get content from Elastic IPFS directly). When the public IPFS gateways are used to try to resolve content, we should perform extra validations to guarantee the content is not harmful, given it was not previously analysed. This information should be shared with w3malware (or we should request w3malware to look into it).

List of potential work streams:

1. Check in the background for content validation

> Wire reads pipeline with basic malware checker

Goal (quick win): We can set up a quick PoC that relies on Google tooling to do an async inspection. If content is bad, we can immediately add it to our denylist and share it with badbits. Once w3malware is in place, we can integrate with it as appropriate via the spec'ed protocol.

Flow:

* Detect if obtained content type is flagged for inspection.
  If so, just serve the content right away (TBD) and do an async request to get it verified
* A CID verification system is invoked
  * Start by using https://cloud.google.com/web-risk/docs/evaluate-api?hl=en&authuser=1 to get an inspection of the content (please note that we need access: requested, but still not granted...)
  * (TBD) (Once w3malware is up)
    * Use it directly to ask for a report
    * (TBD) report known results - needs protocol to be specified
  * Note: be aware that multiple calls can happen while one is in progress; we have to track this to avoid duplicate calls. As you might imagine, they would otherwise just trigger recursive checks
* Invalidate the cache for this CID if malicious https://developers.cloudflare.com/workers/runtime-apis/cache/#delete

Architecture:

* https://github.com/web3-storage/reads package `cid-verifier` (`observer`, `insights` or other name to be suggested)
  * Cloudflare Worker that bridges the reads IPFS edge gateway worker with the CID insights we know about
  * To be used from edge-gateway as a worker binding
  * `POST /evaluate` HTTP route receiving CID and content path (probably just `/evaluate/${gatewayRequest.url}`)
  * KV to track ongoing processing of CIDs
  * Push logs to Loki

(right interaction between workers) ![](https://i.imgur.com/iuxp4nh.png)

Notes:

* Once w3malware is in place and analyses content written into the web3.storage platform, we don't need to try to evaluate content stored there.
  * (we can optimize for content stored with us) So, we can confidently go to dagula to get content and only evaluate responses from the final gateway race with the public gateways
* For `text/html` content, we need to evaluate whether it is a phishing website, but it can also be any other kind of malicious content
  * Let's just start with analysis for when `text/html` content is detected. This is the main current point in the blog posts
* There are ongoing discussions on whether we should block serving content until there is a guarantee that the content is legit
  * TBD

2. Wire up automation for reports

> There are multiple data sources we should use. There are other IPFS gateways, and content we never hear about, that we can get to know is malicious before that CID is ever requested in our system

Architecture:

* A set of small servers running on Digital Ocean Apps that track new information

Services:

* API for `report-uri` from CSP https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/report-uri (more context at https://github.com/nftstorage/nftstorage.link/pull/172)
* Server that listens on logs from the `cid-verifier` worker to report by mail to badbits
* (TBC) Listen on events from `w3malware`
* Server that uses the Google API + VirusTotal API + malware detection packages like https://www.npmjs.com/package/@passmarked/malware / https://www.npmjs.com/package/@passmarked/phishtank to track reports of malicious CIDs
  * we should probably select a few gateways to track, not only ours but also ipfs.io, dweb.link, cf-ipfs.com
* Wire it up with the remaining systems
* ...

3. Refactor the denylist into the `cid-verifier`

> Bring all the malware codebase into the same scope

Architecture:

* `cid-verifier` to have access to the DenyList KV instead of `edge-gateway`
* `cid-verifier` new HTTP routes
  * `GET /denylist`
  * `POST /denylist`
* `edge-gateway` to use the `cid-verifier` worker binding to check the denylist instead of direct access to the KV

(left interaction between workers) ![](https://i.imgur.com/iuxp4nh.png)

Notes:

* Currently the denylist has 2 components:
  * KV storing hash representations of bad content
  * cron job to sync the KV data with new content in https://badbits.dwebops.pub/denylist.json

## UPDATE 5 Sep

### KVs design

We currently rely on the denylist KV as the source of truth to know whether content should be served. While for malware we want a KV to track responses from malware detection systems (like the Google Evaluate API), when the edge gateway wants to check if a CID is malicious it just needs the information that the CID is in the denylist (so, no need to check two KVs every time).
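To make the single-denylist-check read path concrete, here is a minimal sketch. The names (`DENYLIST`, `toDenyListAnchor`, `isDenied`) are hypothetical, a `Map` stands in for the Cloudflare KV namespace, and the sha256-of-`${cid}/${path}` anchor format is an assumption loosely modelled on the hashed entries badbits publishes.

```javascript
import { createHash } from 'node:crypto'

// Hypothetical anchor format: sha256 of `${cid}/${path}` (assumption for
// illustration, loosely modelled on badbits-style hashed entries).
function toDenyListAnchor (cid, path = '') {
  return createHash('sha256').update(`${cid}/${path}`).digest('hex')
}

// In-memory stand-in for the DENYLIST KV namespace.
const DENYLIST = new Map()

// The edge gateway only needs this single lookup on the read path.
async function isDenied (cid, path = '') {
  return DENYLIST.has(toDenyListAnchor(cid, path))
}

// Example: the verifier flags a CID, the gateway then sees it as denied.
DENYLIST.set(toDenyListAnchor('bafybadcid'), { status: 451 })
```

On the edge-gateway read path this stays a single KV lookup per request, independent of how many verification services have produced results in the malware KV.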
The `cid-verifier` package should be able to receive requests to verify CIDs (POST) or to get the known state of a CID (GET). On GET we should just check the denylist. On POST we should put the information about why the content is malicious into the malware-related KV, and also update the denylist with the information that the CID is malicious.

The `cid-verifier` KV should be agnostic to which service we use to evaluate CIDs. We are now using Google Evaluate; in the future we can add more services (or just replace Google with another one). With that in mind, we should just name it appropriately (maybe `CIDS_VERIFIED` or similar?) and have a data model that allows us to see results from multiple services at the same time. The current proposal is to have keys with the format `cid/${sourceName}` and value = sourceValue. So in this case we would for now have `cid/google-evaluate-api`, so that we can then add further services. We should then rely on the KV list API to get all the entries with the prefix `cid`. This means we do not need to mutate writes to pack results from different services into the same value, which would be problematic with KV eventual consistency.

Speaking of CF KVs being [eventually consistent](https://developers.cloudflare.com/workers/learning/how-kv-works/), we need to be careful about how we set up writes, given that if two concurrent requests happen (one in Europe and one in the USA, for instance) there is a possibility of:

```
req1.get()
req2.get()
req1.put(`PENDING/${Date.now()}`)
req1.put(result)
req2.put('PENDING')
req2.put(result)
```

On request failure we clean up the pending marker:

```
catch (e) { kv.delete() }
```

Key layout:

```
key = cid/google-service.lock   value = pending
key = cid/google-service        value = google-value
delete => key = cid/google-service.lock
```

---

Eventually it will be fine in the end for this case. But if we also consider that when requests fail we **must** delete the pending marker, we will get into issues:
```
req1.get()
req2.get()
req1.put('PENDING')
req2.put('PENDING')
req2.put(result)
req1 fails == DELETE PENDING
```

With that in mind, we need a locking mechanism so that we never delete actually obtained values. Writing a lock key `cid/google-evaluate-api.lock` with value `PENDING` will allow us to only delete such lock values, and never the result value if we already got one. This way, we never have issues with concurrent writes to the same key.

### Decisions

HTTP:

* GET = return 200 if we have something (bad or not, client checks body)
* POST = return 204 if content is good, return 403/451 (or others) if malicious
* 404 if we don't have anything

### Draft proposed plan for this week

1. Refactor KVs usage per review
2. Wire up final testing
3. Test evaluate + lookup APIs as we previously talked about
4. Benchmark the time a lookup takes for a few pieces of malicious content
5. ~~(if time) start writing a service (Digital Ocean App) that listens on logs from the `cid-verifier` Loki instance and sends emails to badbits when malicious content is found (see status API)~~

TODO:

- @vasco pnpm i to check build
- align on versions

## Fri 9th September

To Align:

- Review inline suggestions
- Threat confidence values https://cloud.google.com/web-risk/docs/reference/rest/v1eap1/TopLevel/evaluateUri#confidencelevel
  - do we want `HIGH` + `VERY_HIGH` + `EXTREMELY_HIGH`?
- HTTP Endpoints

  ```
  GET /denylist --> this would get results from DENYLIST
  GET / --> (DO NOT IMPLEMENT NOW) this would return the results from CID_VERIFIER_RESULTS
  POST / --> this would trigger multiple services (now just Google) and write results to CID_VERIFIER_RESULTS
  ```

- Status codes to return based on previous talk

To follow up:

- Error handling for 404 from the Google API
- Testing
  - Augment tests
    - ✅ in edge-gateway with {cid1}, {cid1/index.html} and {cid2}, we should test the 3 flows
      - cid1 is good, but the resource CID of cid1/index.html is bad
    - ✅ in edge-gateway make sure we are calling `env.CID_VERIFIER.fetch(${resourceCid}&url=${encodeURIComponent(request.url)}, { method: 'POST' })`
    - in cid-verifier, more coverage with different score types / confidence levels. Inspect the KV, trigger concurrent requests in parallel and inspect the KV, status codes
- Benchmark (with staging + denylist temporarily disabled)
- GET / route with array of results
- Move the denylist route handler to its own file
- Basic HTTP Auth
  - safeguard: only support nftstorage.link and w3s.link request URLs
  - basic token
  - only edge gateway
- Validate that the CID is a valid CID
- Digital Ocean App to handle logs to badbits
- Cache bad responses (in edge gateway)
  1. make denylist its own package
  2. API to invalidate cache
- Figure out auth with the resource CID and guarantee it is from the URL provided
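To tie the endpoint list and the earlier status-code decisions together, here is a hedged sketch of the route logic. The handler names and the `DENYLIST` `Map` (standing in for the KV namespace) are hypothetical; this is a sketch of the decided behaviour, not the implementation.

```javascript
// In-memory stand-in for the DENYLIST KV namespace.
const DENYLIST = new Map()

// GET /denylist: 200 if we have something (bad or not, the client checks
// the body), 404 if we don't have anything.
function getDenyListStatus (cid) {
  return DENYLIST.has(cid)
    ? { status: 200, body: DENYLIST.get(cid) }
    : { status: 404 }
}

// POST /: 204 if content is good, 403 (or 451) if malicious; a malicious
// verdict also updates the denylist so future GETs see it.
function postVerifyStatus (cid, verdict) {
  if (verdict === 'MALICIOUS') {
    DENYLIST.set(cid, { status: 403 })
    return { status: 403 }
  }
  return { status: 204 }
}
```

Usage: the edge gateway would call POST on cache miss for flagged content types and GET on every read, so a 403/451 verdict becomes visible to subsequent GETs through the denylist.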
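The lock-key write pattern from the KVs design section can be sketched as below. A `Map` again stands in for the `CID_VERIFIER_RESULTS` KV namespace, and the `evaluate`/`runEvaluation` names are hypothetical; the key shapes follow the proposed `cid/${sourceName}` data model with a `.lock` twin.

```javascript
// In-memory stand-in for the CID_VERIFIER_RESULTS KV namespace.
const CID_VERIFIER_RESULTS = new Map()

async function evaluate (cid, sourceName, runEvaluation) {
  const resultKey = `${cid}/${sourceName}`
  const lockKey = `${resultKey}.lock`
  // Skip if a result already exists or another request is in flight,
  // so concurrent requests don't trigger recursive checks.
  if (CID_VERIFIER_RESULTS.has(resultKey) || CID_VERIFIER_RESULTS.has(lockKey)) {
    return CID_VERIFIER_RESULTS.get(resultKey)
  }
  CID_VERIFIER_RESULTS.set(lockKey, 'PENDING')
  try {
    const result = await runEvaluation(cid)
    CID_VERIFIER_RESULTS.set(resultKey, result)
    return result
  } finally {
    // Only the lock key is ever deleted, so a failed request can never
    // wipe out a result written under the separate result key.
    CID_VERIFIER_RESULTS.delete(lockKey)
  }
}
```

The design point is that failures only ever delete `*.lock` keys: the result key is written once and never removed by cleanup, which is what makes the pattern safe under KV eventual consistency.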