# Delete /raw from S3 buckets
## Previous work
`/raw` files have previously been copied over to the `carpark-prod` bucket using [carpark-backfill](https://github.com/web3-storage/carpark-backfill).
The [backfill](https://github.com/web3-storage/carpark-backfill/blob/main/packages/backfill/index.js) jobs running in ECS containers would go through a given ndjson file with a list of files to copy, and `copy+index` each of those files. The `copy+index` [function](https://github.com/web3-storage/carpark-backfill/blob/main/packages/backfill/copy.js#L12) would read the file, compute a side index for it, and write all the necessary files to the destination buckets (the CAR file itself, the side index, and the dudewhere rootCID mapping).
To kick off the `backfill` pipeline, the ndjson files were generated using [bucket-diff](https://github.com/web3-storage/carpark-backfill/tree/feat/run-list-update-jobs/packages/bucket-diff) (note it is in a different branch to ease deployments at the time). The `create-list` [function](https://github.com/web3-storage/carpark-backfill/blob/feat/run-list-update-jobs/packages/bucket-diff/index.js#L20) simply [pipes](https://github.com/web3-storage/carpark-backfill/blob/feat/run-list-update-jobs/packages/bucket-diff/index.js#L37) a function that lists the bucket by [prefix](https://github.com/web3-storage/carpark-backfill/blob/feat/run-list-update-jobs/packages/bucket-diff/create.js#L7) and stores batches of lists of files to copy over.
When the first copy job ran with all the lists created by `bucket-diff`, several files were seen failing to copy. Therefore, `bucket-diff` was extended with an [`update` list](https://github.com/web3-storage/carpark-backfill/blob/feat/run-list-update-jobs/packages/bucket-diff/index.js#L53) functionality. This [pipes](https://github.com/web3-storage/carpark-backfill/blob/feat/run-list-update-jobs/packages/bucket-diff/index.js#L77) [reading](https://github.com/web3-storage/carpark-backfill/blob/feat/run-list-update-jobs/packages/bucket-diff/update.js#L15) rows from a given list into a [filter](https://github.com/web3-storage/carpark-backfill/blob/feat/run-list-update-jobs/packages/bucket-diff/update.js#L59) that checks whether they were already stored (HEAD request to the destination CAR bucket, side index bucket and dudewhere mapping bucket) or are in the denylist. For the entries verified, if they were not in badbits and the HEAD request failed for any of the destination buckets, they were recorded in a follow-up file of failures to copy over.
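For illustration, here is a minimal sketch of that kind of HEAD-based verification using the AWS SDK v3. The destination bucket names and key layouts below are assumptions for the example, not necessarily the exact ones `bucket-diff` uses.
```
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({ region: 'us-east-2' })

// Returns true if the object exists in the given bucket, false on a 404.
async function exists (bucket: string, key: string): Promise<boolean> {
  try {
    await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: key }))
    return true
  } catch (err: any) {
    if (err?.$metadata?.httpStatusCode === 404) return false
    throw err
  }
}

// Assumed destination buckets/keys for the example: the CAR file, the side
// index and the dudewhere rootCID -> CAR CID mapping must all exist for an
// entry to be considered copied.
async function alreadyCopied (carCid: string, rootCid: string): Promise<boolean> {
  const [car, sideIndex, dudewhere] = await Promise.all([
    exists('carpark-prod-0', `${carCid}/${carCid}.car`),
    exists('satnav-prod-0', `${carCid}/${carCid}.car.idx`),
    exists('dudewhere-prod-0', `${rootCid}/${carCid}`)
  ])
  return car && sideIndex && dudewhere
}
```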
The status of these copies was tracked in https://docs.google.com/spreadsheets/d/1CLjpd-W53dJOovbKQeqdV-QE_K3WX8IGCTNSuzcvgDk/edit#gid=0.
There were 3 iterations of this process:
- create list + copy and index
- update list (diff) + copy and index
- update list (diff)
The second update list (diff) run returned empty files, proving everything was copied over with a side index created and a dudewhere mapping written (unless in badbits).
Some previous thoughts and stats: https://hackmd.io/@olizilla/delete-s3
## Usage metrics
The above process of copying and verifying by creating a diff list should give us strong guarantees that everything was successfully copied over. Writes also included a SHA-256 checksum to guarantee no bits were flipped.
However, we can gather a few more metrics to increase our confidence.
Set up `dotstorage-prod-raw-analysis` for [`prod-0`](https://s3.console.aws.amazon.com/s3/bucket/dotstorage-prod-0/metrics/storage_class_analysis/view?region=us-east-2&id=dotstorage-prod-raw-analysis) and [`prod-1`](https://s3.console.aws.amazon.com/s3/bucket/dotstorage-prod-1/metrics/storage_class_analysis/view?region=us-west-2&id=dotstorage-prod-raw-analysis&tab=dashboard), which should give us the volume of reads currently happening (it should be quite small by now). TBD (needs 24 hours to run).
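For reference, a hedged sketch of creating that storage class analysis configuration programmatically, scoped to the `raw/` prefix. The configuration `Id` matches the dashboards linked above; the rest is an assumption of how it would be set up (it would be applied once per bucket).
```
import { S3Client, PutBucketAnalyticsConfigurationCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({ region: 'us-east-2' })

// Storage class analysis scoped to the raw/ prefix. A DataExport block could
// be added to dump daily CSVs to another bucket if we want the numbers
// outside the console.
await s3.send(new PutBucketAnalyticsConfigurationCommand({
  Bucket: 'dotstorage-prod-0',
  Id: 'dotstorage-prod-raw-analysis',
  AnalyticsConfiguration: {
    Id: 'dotstorage-prod-raw-analysis',
    Filter: { Prefix: 'raw/' },
    StorageClassAnalysis: {}
  }
}))
```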
We can see (from metrics not filtered to the `/raw` prefix) that reads from `dotstorage-prod-0` are quite [low](https://s3.console.aws.amazon.com/s3/bucket/dotstorage-prod-0/metrics/storage_class_analysis/view?region=us-east-2&id=dotstorage-prod-1-storage-class-analysis) by now. `dotstorage-prod-1` still has a considerably [high](https://s3.console.aws.amazon.com/s3/bucket/dotstorage-prod-1/metrics/storage_class_analysis/view?region=us-west-2&id=dotstorage-prod-1-storage-class-analysis) read volume.
In the unlikely event that some data is not available, we can always recover it using our Filecoin backups.
## Implementation
Depending on how comfortable we feel with the guarantees we have today, we can move forward in different ways.
### Lifecycle based deletion
We can specify S3 lifecycle rules [filtered by a prefix](https://docs.aws.amazon.com/AmazonS3/latest/userguide/intro-lifecycle-rules.html#intro-lifecycle-rules-filter) to [expire objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-expire-general-considerations.html). This puts AWS in control of the deletion process instead of us running a script.
```
<LifecycleConfiguration>
  <Rule>
    <ID>delete-raw</ID>
    <Status>Enabled</Status>
    <Filter>
      <Prefix>raw/</Prefix>
    </Filter>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```
[Considering the AWS docs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-expire-general-considerations.html):
> There may be a delay between the expiration date and the date at which Amazon S3 removes an object. You are not charged for expiration or the storage time associated with an object that has expired.
It will be easier and cheaper to flag all items for deletion than to delete them ourselves in a script. As soon as they are flagged as expired, we no longer pay for them. And of course, there is no need to spend money on a container to run a delete script.
Based on a [blog post](https://plainenglish.io/blog/how-to-easily-delete-an-s3-bucket-with-millions-of-files-in-it-ad5cec3529b9) reporting a large S3 deletion in 2021 Q1, we could expect around 20 days for everything to be deleted, which would still make this approach much cheaper than running our own delete script.
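If we go this route, the rule can also be applied with the AWS SDK rather than raw XML. A sketch equivalent to the configuration above (lifecycle expiration only accepts whole days, hence `Days: 1`):
```
import { S3Client, PutBucketLifecycleConfigurationCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({ region: 'us-east-2' })

// Equivalent of the XML rule above: expire everything under raw/ after 1 day.
await s3.send(new PutBucketLifecycleConfigurationCommand({
  Bucket: 'dotstorage-prod-0',
  LifecycleConfiguration: {
    Rules: [{
      ID: 'delete-raw',
      Status: 'Enabled',
      Filter: { Prefix: 'raw/' },
      Expiration: { Days: 1 }
    }]
  }
}))
```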
### Deletion dependent on validation
In case the above guarantees are not good enough to trust that we can delete data from the raw directory, we can get a script running that goes through the bucket for deletion. It should list the entire bucket under the `raw/` prefix. For each entry, we should validate whether it can be deleted by checking that it exists in all required destination buckets. If so, we issue the delete command ourselves.
While this option is the safest, it in a way repeats what the diff command already did (unless we want to add further validations). It will also add extra costs to run, and make us pay for storage for longer.
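A minimal sketch of what such a script could look like. `verifyCopied` is a hypothetical helper standing in for the same HEAD-request checks against the destination buckets described in the previous work section.
```
import { S3Client, paginateListObjectsV2, DeleteObjectsCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({ region: 'us-east-2' })
const Bucket = 'dotstorage-prod-0'

// Hypothetical helper: HEAD requests against the CAR, side index and dudewhere
// destination buckets, as in the bucket-diff update list (diff) filter.
async function verifyCopied (key: string): Promise<boolean> {
  // ... same checks as the verification step above
  return true
}

// List everything under raw/, validate each entry, and delete verified keys
// in batches of up to 1000 (the DeleteObjects limit).
for await (const page of paginateListObjectsV2({ client: s3 }, { Bucket, Prefix: 'raw/' })) {
  const deletable: { Key: string }[] = []
  for (const obj of page.Contents ?? []) {
    if (obj.Key && await verifyCopied(obj.Key)) deletable.push({ Key: obj.Key })
  }
  if (deletable.length) {
    await s3.send(new DeleteObjectsCommand({ Bucket, Delete: { Objects: deletable } }))
  }
}
```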
## Proposal
Currently, the CF-hosted legacy write APIs still write to `dotstorage-prod-1` in addition to the Cloudflare R2 bucket. The main reason to still write to `dotstorage-prod-1` is to guarantee that block-level indexes are created by the E-IPFS `indexer-lambda`. On the reads side of things, only `hoverboard` may read from these buckets. It currently prefers to read from R2's carpark and infers R2 keys for `raw/*` files based on the old naming present in the dynamo table indexing blocks.
Based on the migration previously done, we feel relatively confident that all files (`raw/*`) were successfully migrated and have the necessary indexes in place to be available at the several entry points of our reads pipeline.
As a last check before we go ahead and delete all raw files, we put together analysis dashboards for bucket reads on the `raw/*` prefix from both [`dotstorage-prod-0`](https://s3.console.aws.amazon.com/s3/bucket/dotstorage-prod-0/metrics/storage_class_analysis/view?region=us-east-2&id=dotstorage-prod-raw-analysis) and [`dotstorage-prod-1`](https://s3.console.aws.amazon.com/s3/bucket/dotstorage-prod-1/metrics/storage_class_analysis/view?region=us-west-2&id=dotstorage-prod-raw-analysis&tab=dashboard) to see if we are still reading files from either of these buckets. As expected, `dotstorage-prod-0/raw/*` did not have any reads in the last 24 hours and is ready for deletion. However, `dotstorage-prod-1/raw/*` had reads that we were not expecting. After @vasco-santos and @olizilla brainstormed together, they realized that these are likely reads from `indexer-lambda` creating the block-level indexes for content written through the old APIs, given the content read from this bucket in the last 24 hours was only content younger than 15 days.
On the `dotstorage-prod-0/raw/*` side of things, we seem ready to delete these files. For these, we should just use a lifecycle configuration with a 1 day expiration (lifecycle expiration only supports whole days, so this is the closest to the 24 hours we want):
```
<LifecycleConfiguration>
  <Rule>
    <ID>delete-raw</ID>
    <Status>Enabled</Status>
    <Filter>
      <Prefix>raw/</Prefix>
    </Filter>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```
On the `dotstorage-prod-1/raw/*` side, we currently have new files still being written, and these files need to be read to create block-level indexes until we sunset the old APIs. We therefore propose to also rely on a lifecycle configuration to delete them, but over a longer period like 14 days, which gives us enough margin if something goes wrong before deletion.
```
<LifecycleConfiguration>
  <Rule>
    <ID>delete-raw</ID>
    <Status>Enabled</Status>
    <Filter>
      <Prefix>raw/</Prefix>
    </Filter>
    <Expiration>
      <Days>14</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```
Given we still see some read usage (which we believe must come from the block indexing), there is a chance that we are missing something and some files were not properly migrated (quite unlikely, given we are only serving recent files that on upload are written to both CF R2 and S3). If there is any concern from the team, we can spend some more engineering hours putting together metrics in hoverboard of blocks served by each bucket to make one last validation and monitor for a few more days.
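If we decide to do that last validation, a hedged sketch of the kind of metric we could emit from hoverboard, assuming CloudWatch as the sink. The namespace, metric name and helper are placeholders; hoverboard's actual metrics plumbing may differ.
```
import { CloudWatchClient, PutMetricDataCommand } from '@aws-sdk/client-cloudwatch'

const cw = new CloudWatchClient({ region: 'us-west-2' })

// Placeholder metric: count blocks served per source bucket, so we can confirm
// S3 raw/ reads drop to zero before pulling the trigger on deletion.
async function recordBlockServed (sourceBucket: string) {
  await cw.send(new PutMetricDataCommand({
    Namespace: 'hoverboard',
    MetricData: [{
      MetricName: 'BlocksServed',
      Dimensions: [{ Name: 'SourceBucket', Value: sourceBucket }],
      Unit: 'Count',
      Value: 1
    }]
  }))
}
```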