# VPC Considerations
Operating an EKS cluster requires knowledge of VPC networking, in addition to Kubernetes networking.
We recommend that you understand the EKS control plane communication mechanisms before you start planning your VPC or deploying clusters into existing VPCs.
Refer to [Cluster VPC considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html) and [Amazon EKS security group considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) when architecting a VPC and subnets to be used with EKS.
## EKS Control Plane Communication
An EKS cluster consists of two VPCs:
* An AWS-managed VPC that hosts the Kubernetes control plane
* A customer-managed VPC that hosts the Kubernetes worker nodes where containers run, as well as other customer-managed AWS infrastructure used by the cluster
The worker nodes (customer VPC) need to connect to the managed API server endpoint (AWS VPC). This connection allows each worker node to register itself with the Kubernetes control plane and to receive requests to run application pods.
Worker nodes connect to the EKS control plane through the EKS public endpoint or EKS-managed elastic network interfaces (ENIs). When a cluster is created, at least two VPC subnets are specified. EKS places managed ENIs on the specified subnets. EKS uses these managed ENIs to communicate with nodes and Kubernetes resources deployed on the node subnets.
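If you want to confirm which subnets host the EKS-managed ENIs and which API endpoints are enabled for an existing cluster, the cluster's `resourcesVpcConfig` exposes both. The following is a minimal boto3 sketch; the cluster name `my-cluster` is a placeholder, and credentials/region are assumed to come from your environment:

```python
# Minimal sketch: inspect how an existing cluster is wired into your VPC.
# "my-cluster" is a placeholder name.
import boto3

eks = boto3.client("eks")

vpc_config = eks.describe_cluster(name="my-cluster")["cluster"]["resourcesVpcConfig"]

print("Subnets for EKS-managed ENIs:", vpc_config["subnetIds"])
print("Public endpoint enabled: ", vpc_config["endpointPublicAccess"])
print("Private endpoint enabled:", vpc_config["endpointPrivateAccess"])
```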
You need to provide a minimum of two subnets in at least two Availability Zones when you create a cluster. You can deploy nodes and Kubernetes resources to the same subnets that you specify during cluster creation, but this is not required. Nodes and EKS resources can be deployed to different subnets that meet the requirements listed in the [EKS user guide](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#network_requirements-subnets). EKS doesn't create managed ENIs in these additional subnets.
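As an illustration, here is a minimal boto3 sketch of creating a cluster with two subnets in two different Availability Zones; the cluster name, IAM role ARN, and subnet IDs are placeholders:

```python
# Minimal sketch: create a cluster, passing two subnets in two different AZs.
# All identifiers below are placeholders.
import boto3

eks = boto3.client("eks")

eks.create_cluster(
    name="my-cluster",
    roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",
    resourcesVpcConfig={
        # EKS places its managed ENIs in these subnets.
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
)
```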
The route that worker nodes take to connect to the control plane is determined by whether you have enabled or disabled the private endpoint for your cluster. When the private endpoint is enabled, EKS uses the EKS-managed ENIs to communicate with worker nodes.
When only the public endpoint for the cluster is enabled (the default), worker node-to-control plane communication leaves the VPC but not Amazon's network. For nodes to connect to the public control plane endpoint, they must have a public IP address and a route to the internet (e.g., through a NAT gateway or an internet gateway).
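Endpoint access is a mutable cluster setting. The sketch below enables the private endpoint while keeping the public endpoint open only to a specific CIDR block; the cluster name and CIDR are placeholders:

```python
# Minimal sketch: enable private endpoint access and restrict the public
# endpoint to one CIDR block. Name and CIDR are placeholders.
import boto3

eks = boto3.client("eks")

eks.update_cluster_config(
    name="my-cluster",
    resourcesVpcConfig={
        "endpointPrivateAccess": True,
        "endpointPublicAccess": True,
        "publicAccessCidrs": ["203.0.113.0/24"],
    },
)
```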
To deploy worker nodes in private subnets, either enable private cluster API endpoint access, so that traffic to the control plane flows through the EKS-managed ENIs, or define a default route to a [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html).
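If you take the NAT Gateway route, the private subnet's route table needs a default route pointing at the gateway. A minimal sketch, assuming the route table and NAT Gateway already exist (both IDs are placeholders):

```python
# Minimal sketch: send all non-local IPv4 traffic from a private subnet's
# route table through an existing NAT Gateway. Both IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",  # default route
    NatGatewayId="nat-0123456789abcdef0",
)
```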

## Recommendations
### Maintain DNS Hostname and Resolution Support
Your VPC must have DNS hostname and DNS resolution support enabled; otherwise, your nodes can't register with your cluster.
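You can verify and, if necessary, enable both attributes with the EC2 API. A minimal boto3 sketch, with a placeholder VPC ID:

```python
# Minimal sketch: check and enable the two VPC DNS attributes that node
# registration depends on. The VPC ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"

for attr in ("enableDnsSupport", "enableDnsHostnames"):
    resp = ec2.describe_vpc_attribute(VpcId=vpc_id, Attribute=attr)
    print(attr, resp[attr[0].upper() + attr[1:]]["Value"])

# modify_vpc_attribute accepts one attribute per call.
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```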
### Deploy NAT Gateways in each Availability Zone
If you deploy worker nodes in private subnets (IPv4 and IPv6), consider creating a NAT Gateway in each Availability Zone to ensure a zone-independent architecture. Each NAT Gateway is implemented with redundancy within its Availability Zone.
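A minimal sketch of the per-AZ pattern, assuming one existing public subnet per Availability Zone (the AZ names and subnet IDs are placeholders):

```python
# Minimal sketch: create one NAT Gateway per AZ so private subnets never
# depend on another zone for egress. Each subnet ID is the public subnet
# in that AZ; all values are placeholders.
import boto3

ec2 = boto3.client("ec2")

public_subnets = {
    "us-west-2a": "subnet-aaaa1111",
    "us-west-2b": "subnet-bbbb2222",
}

for az, subnet_id in public_subnets.items():
    eip = ec2.allocate_address(Domain="vpc")  # Elastic IP for the gateway
    nat = ec2.create_nat_gateway(SubnetId=subnet_id, AllocationId=eip["AllocationId"])
    print(az, "->", nat["NatGateway"]["NatGatewayId"])
```

Each private subnet's route table should then point its default route at the NAT Gateway in the same Availability Zone.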
### Enable Automatic IP Assignment for Nodes in Public Subnets
If you use public subnets, then they must have the automatic public IP address assignment setting enabled; otherwise, worker nodes will not be able to communicate with the cluster.
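A minimal sketch for enabling the setting on existing subnets (the subnet IDs are placeholders):

```python
# Minimal sketch: turn on automatic public IP assignment for each public
# node subnet. Subnet IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    ec2.modify_subnet_attribute(
        SubnetId=subnet_id,
        MapPublicIpOnLaunch={"Value": True},
    )
```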