# VPC Scheme
## Environments
- Production
- Staging
- Development
- Carbon
- Vault
## Goals
- Link Vault to the rest of them, so that applications deployed in any env can access Vault.
- All prod systems within a single VPC, except Vault
The most performant option is _probably_ VPC peering. Other options mentioned were Transit Gateways and a public Vault endpoint.
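As a rough point of reference, a minimal boto3 sketch of what the peering option would involve, assuming same-account, same-region VPCs with non-overlapping CIDRs; every ID and CIDR below is a placeholder, not a real resource:

```python
# Sketch: peer the Vault VPC with another env's VPC and route traffic both ways.
# All IDs and CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VAULT_VPC_ID = "vpc-aaaa1111"   # placeholder
PROD_VPC_ID = "vpc-bbbb2222"    # placeholder
VAULT_CIDR = "10.1.0.0/16"      # placeholder; must not overlap with the peer's CIDR
PROD_CIDR = "10.2.0.0/16"       # placeholder

# Request and accept the peering connection (same-account, same-region case).
peering = ec2.create_vpc_peering_connection(
    VpcId=PROD_VPC_ID, PeerVpcId=VAULT_VPC_ID
)["VpcPeeringConnection"]
pcx_id = peering["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side needs a route to the other side's CIDR via the peering connection.
ec2.create_route(
    RouteTableId="rtb-prod1111",   # placeholder: prod VPC route table
    DestinationCidrBlock=VAULT_CIDR,
    VpcPeeringConnectionId=pcx_id,
)
ec2.create_route(
    RouteTableId="rtb-vault2222",  # placeholder: Vault VPC route table
    DestinationCidrBlock=PROD_CIDR,
    VpcPeeringConnectionId=pcx_id,
)
```

Security groups on both sides would still need rules allowing the Vault port (8200 by default), and peering is non-transitive, so each env that needs Vault would need its own peering connection.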
## VPC challenges
- only two AZs (A and B)
- not well segmented: a single subnet
- Need a VPC to put RDS into... where does that go?
- Can't put RDS into the current EKS VPCs, because those IP ranges are full (one possible workaround is sketched below)
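One option worth validating in a spike (an assumption, not a decision): associate a secondary CIDR block with the existing EKS VPC and carve RDS subnets out of it, rather than standing up a separate RDS VPC. A boto3 sketch with placeholder IDs and CIDRs:

```python
# Sketch: extend a full VPC with a secondary CIDR block and create RDS subnets in it.
# The VPC ID, CIDRs, and AZs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

EKS_VPC_ID = "vpc-cccc3333"        # placeholder
SECONDARY_CIDR = "100.64.0.0/16"   # placeholder; must not overlap any existing range

# Associate the extra address space with the existing VPC.
ec2.associate_vpc_cidr_block(VpcId=EKS_VPC_ID, CidrBlock=SECONDARY_CIDR)

# One RDS subnet per AZ in the new range (an RDS subnet group needs at least two AZs).
for az, cidr in [("us-east-1a", "100.64.0.0/24"), ("us-east-1b", "100.64.1.0/24")]:
    ec2.create_subnet(VpcId=EKS_VPC_ID, AvailabilityZone=az, CidrBlock=cidr)
```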
## Transit Gateways
Need to spike on this and understand how it works
Hypothetically, it could
- connect all the disparate components, eg RDS, Vault, apps, across different envs, and allow traffic to be routed between them
- do this without concern for conflicting CIDR spaces (worth checking in the spike: like peering, Transit Gateway routing still generally requires non-overlapping CIDRs between attachments)
- be affordable at low traffic volumes, since data processing is billed per GB, plus a flat hourly charge per attachment (rough sketch below)
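A rough boto3 sketch of what the spike could set up: one hub Transit Gateway, one VPC attachment per env, and routes pointing at the gateway. All IDs, subnet IDs, and CIDRs are placeholders:

```python
# Sketch: a hub Transit Gateway with one attachment per env VPC.
# All IDs, subnet IDs, and CIDRs are placeholders; waits for resources to reach
# the "available" state are omitted for brevity.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(Description="hub for env-to-Vault traffic")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# One attachment per VPC; each attachment needs a subnet in every AZ it should serve.
attachments = {
    "vault": ("vpc-aaaa1111", ["subnet-va1", "subnet-vb1"]),  # placeholders
    "prod": ("vpc-bbbb2222", ["subnet-pa1", "subnet-pb1"]),   # placeholders
}
for name, (vpc_id, subnet_ids) in attachments.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id, VpcId=vpc_id, SubnetIds=subnet_ids
    )

# Each VPC's route table then needs a route to the other side via the gateway, e.g.:
ec2.create_route(
    RouteTableId="rtb-prod1111",         # placeholder: prod VPC route table
    DestinationCidrBlock="10.1.0.0/16",  # placeholder: Vault VPC CIDR
    TransitGatewayId=tgw_id,
)
```

Unlike peering, the gateway is transitive: every attached VPC can reach every other attached VPC, subject to the Transit Gateway route tables.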
## Possible Architectures
### Simplicity is King
- minimal VPC peering
- one or two VPCs for everything in prod
- maybe everything in one, or maybe one for static and one for ephemeral infrastructure
- staging is in prod, eg as shadow deployments
- maybe rename "prod" to something else so we're not confusing metaphors
- make the one VPC as big as AWS allows: a single VPC CIDR tops out at a /16, eg `10.0.0.0/16`
- a service, eg RDS, gets a subnet within which to assign instances (see the carving sketch after this list)
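To make the per-service-subnet idea concrete, a small sketch that carves a /16 into /20 blocks using Python's stdlib `ipaddress` module; the VPC CIDR and the service list are placeholders:

```python
# Sketch: carve a /16 VPC range into /20 blocks, one per service per AZ.
# The VPC CIDR and the service names are placeholders.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")  # placeholder VPC CIDR (a /16 is AWS's largest)
blocks = vpc.subnets(new_prefix=20)        # yields 16 /20 blocks of 4,096 addresses each

plan = {}
for service in ["eks-a", "eks-b", "rds-a", "rds-b", "vault-a", "vault-b"]:  # placeholders
    plan[service] = next(blocks)

for service, cidr in plan.items():
    print(f"{service}: {cidr}")
```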
### In the Interim ... where do we put Vault now?
- Test out Transit Gateways, see how far that gets us with the current generation of EKS
## Questions
- **Do we still need to have a separate VPC for ephemeral infra?** Brandon thinks EKS has introduced a change to subnet IP assignments (possibly the VPC CNI custom networking support, which lets pods draw IPs from subnets other than the node's) that may allow us more flexibility.