# w3up access-api deployments -> w3infra
tl;dr wdyt of moving access-api from d1+r2+cf-workers to ?+s3+lambda?
I get that people don't really want to be using D1 with access-api.
Do we even want to be using Cloudflare Workers for it? Are you a fan of a potential future where we run access-api in AWS Lambda?
It seems like it would be easier to reason about all the w3up stuff if all the business logic were in one place, rather than split across access-api in Cloudflare and upload-api in AWS.
We could make it so the w3protocol monorepo
* is a monorepo of many packages that are published to npm
* includes an access-api implementation package, similar to the upload-api implementation package we host there now, which w3infra depends on as a library and deploys to AWS
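To make the "implementation package vs. deployment repo" split concrete, here's a hypothetical layout sketch. The package names under `packages/` are illustrative assumptions, not the actual w3protocol package list:

```
w3protocol/                  # publishes libraries to npm, deploys nothing
  packages/
    access-api/              # access-api business logic, platform-agnostic
    access-api-aws/          # AWS Lambda factory wrapping access-api
    upload-api/              # existing upload-api implementation package

w3infra/                     # the only repo that deploys running services
  # depends on access-api-aws + upload-api from npm,
  # wires them to AWS data stores and Lambda
```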
Pros:
* then we'd have only w3infra as the monorepo that deploys running services to the cloud
* we'd have only w3protocol -> 'w3up'? as the monorepo that deploys packages to npm (to be consumed by the former)
Cons: time to get it there :confused:. We could phase it as follows, delivering in several milestones and dropping D1 along the way:
1. move off Cloudflare-only datastores and use AWS data stores instead (while still using Cloudflare Workers)
    * D1
        * to DynamoDB
        * or to S3 Select + Kinesis
    * R2
        * should be able to use S3 instead
    * result:
        * we no longer use D1, and instead use an AWS data store
        * Cloudflare Workers would be fetching data a lot from AWS, so we're probably paying a bit for egress from AWS, but it's worth it; overall access-api data transfer should be quite low
2. add an access-api-aws package to w3protocol with a POC AWS Lambda factory that passes the access-api ucanto+http conformance tests, using aws-lambda-test-utils
    * result: we have an access-api Lambda impl we can use
3. w3infra depends on access-api-aws and adds a Lambda using the exported lambda factory (injecting implementations of our data access objects, backed by AWS data stores)
4. test w3infra access-api on aws lambda + aws data stores
5. decommission cloudflare access-api
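The factory-plus-injection pattern in steps 2–3 can be sketched roughly as below. All names here (`DelegationsStore`, `createAccessApiHandler`, `toLambda`) are hypothetical illustrations of the shape, not actual w3protocol or ucanto exports; a real implementation would decode ucanto invocations and inject a DynamoDB/S3-backed store instead of the in-memory stub.

```typescript
// Data access object the access-api logic depends on. w3infra would
// inject a DynamoDB- or S3-backed implementation; the in-memory class
// below is a stand-in for tests.
interface DelegationsStore {
  put(cid: string, bytes: Uint8Array): Promise<void>
  get(cid: string): Promise<Uint8Array | undefined>
}

class InMemoryDelegationsStore implements DelegationsStore {
  private map = new Map<string, Uint8Array>()
  async put(cid: string, bytes: Uint8Array): Promise<void> {
    this.map.set(cid, bytes)
  }
  async get(cid: string): Promise<Uint8Array | undefined> {
    return this.map.get(cid)
  }
}

// The "factory": access-api business logic packaged as a library
// function that returns a platform-agnostic request handler.
function createAccessApiHandler(deps: { delegations: DelegationsStore }) {
  return async (request: { method: string; body: string }) => {
    // A real handler would decode and execute a ucanto invocation here.
    if (request.method !== 'POST') return { status: 405, body: '' }
    return { status: 200, body: `received ${request.body.length} bytes` }
  }
}

// Thin AWS Lambda adapter (API Gateway-style event shape) around the
// platform-agnostic handler, so the same logic could also run elsewhere.
function toLambda(handler: ReturnType<typeof createAccessApiHandler>) {
  return async (event: { httpMethod: string; body: string | null }) => {
    const res = await handler({ method: event.httpMethod, body: event.body ?? '' })
    return { statusCode: res.status, body: res.body }
  }
}
```

The point of keeping `createAccessApiHandler` free of any AWS types is that the same package could back the Cloudflare Worker during the transition, with only the thin adapter differing per platform.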