# Amazon EKS Best Practices Guide for Networking (v2)
Understanding Kubernetes networking is critical to operating your applications efficiently. This section of the EKS Best Practices Guide covers the different cluster networking modes.
## How to use this guide?
This guide introduces the [Amazon VPC Container Network Interface (CNI)](https://github.com/aws/amazon-vpc-cni-k8s) in the context of Kubernetes networking.
The VPC CNI is highly configurable to support different use cases. This guide further includes dedicated sections on different VPC CNI use cases, operating modes, and sub-components.
Generally, the VPC CNI plugin coordinates VPC resources with a Kubernetes cluster. The VPC CNI implements Kubernetes networking concepts using native AWS resources.
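To make this coordination concrete, the sketch below (illustrative only, not part of the VPC CNI itself) uses boto3 to list the elastic network interfaces (ENIs) and private IP addresses attached to a worker node. The instance ID is a placeholder, and AWS credentials are assumed to be configured.

```python
# Illustrative sketch: list the ENIs and private IPs the VPC CNI manages on a
# worker node. The instance ID below is a placeholder for a real EKS node.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder: an EKS worker node

response = ec2.describe_network_interfaces(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)
for eni in response["NetworkInterfaces"]:
    # The first private IP is the ENI's primary address; in Secondary IP mode
    # the VPC CNI assigns the remaining (secondary) addresses to Pods.
    ips = [p["PrivateIpAddress"] for p in eni["PrivateIpAddresses"]]
    print(eni["NetworkInterfaceId"], eni["SubnetId"], ips)
```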
## Kubernetes Networking
Kubernetes sets the following requirements on cluster networking:
* Pods scheduled on any node must be able to communicate with all Pods on all nodes without using NAT (Network Address Translation).
* All system daemons (background processes, for example, [kubelet](https://kubernetes.io/docs/concepts/overview/components/)) running on a particular node can communicate with the Pods running on the same node.
* Pods that use the [host network](https://docs.docker.com/network/host/) must be able to contact all other Pods on all other nodes without using NAT.
The VPC CNI uses an IP per Pod model to meet the above requirements.
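You can observe the IP per Pod model directly. The sketch below assumes a working kubeconfig and the official `kubernetes` Python client; with the VPC CNI, each non-hostNetwork Pod receives its own VPC-routable address, distinct from the node's IP.

```python
# Illustrative sketch: list each Pod's IP next to its node's IP to see the
# IP-per-Pod model. Requires `pip install kubernetes` and cluster access.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    print(
        f"{pod.metadata.namespace}/{pod.metadata.name}: "
        f"pod_ip={pod.status.pod_ip} node_ip={pod.status.host_ip} "
        f"host_network={bool(pod.spec.host_network)}"
    )
```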

The AWS-provided VPC CNI is the default networking plugin for EKS clusters. Alternate CNIs may be used, either replacing or complementing the VPC CNI. For background, review the [Kubernetes documentation on network plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/).
CNI plugins have multiple responsibilities:
* Network interfaces: Adding Pods to, and removing them from, the Kubernetes Pod network. This includes creating/deleting each Pod's network interface and connecting/disconnecting it to/from the rest of the network implementation.
* IPAM (IP Address Management): Allocating and releasing IP addresses for Pods as they are created and deleted. Depending on the plugin, this may include assigning one or more interfaces and ranges of IP addresses (CIDRs) to each node, from which addresses are allocated to Pods (a simplified sketch follows this list).
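To make the IPAM responsibility concrete, here is a deliberately simplified allocator. It is not the VPC CNI's implementation; it only illustrates the core contract of handing out addresses from a node-local range and releasing them when Pods are deleted.

```python
# A deliberately simplified IPAM sketch -- NOT the VPC CNI's implementation.
import ipaddress

class ToyIpam:
    def __init__(self, node_cidr: str):
        # Pre-compute the usable host addresses in the node's range.
        self._free = list(ipaddress.ip_network(node_cidr).hosts())
        self._assigned: dict[str, ipaddress.IPv4Address] = {}

    def allocate(self, pod_uid: str) -> ipaddress.IPv4Address:
        # Hand the next free address to the Pod being created.
        if not self._free:
            raise RuntimeError("node CIDR exhausted")
        ip = self._free.pop(0)
        self._assigned[pod_uid] = ip
        return ip

    def release(self, pod_uid: str) -> None:
        # Return the Pod's address to the pool on deletion.
        ip = self._assigned.pop(pod_uid, None)
        if ip is not None:
            self._free.append(ip)

ipam = ToyIpam("192.168.1.0/28")
ip = ipam.allocate("pod-a")   # e.g. 192.168.1.1
ipam.release("pod-a")         # the address returns to the pool
```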
## Amazon VPC CNI and Configuration Modes
The [Amazon VPC CNI plugin](./vpc-cni.md) offers Pod networking for EKS. The VPC CNI is highly configurable to address a variety of use cases, including:
* Extending AWS native security group features to Pods
* Increasing the number of IPs available on nodes to assign to Pods
* Enhancing Pod security by running Pods and nodes in different subnets
* Scaling beyond IPv4 limits
* Supporting different Amazon EKS node and operating system types
* Accelerating Pod launch times
* Establishing tenant isolation and network segmentation
* Allowing network segregation via multi-homed Pods
The networking modes supported by the Amazon VPC CNI can be broadly classified as:
* Secondary IP Mode
* Prefix Mode
* Security Groups Per Pod
* Custom Networking
The Amazon VPC CNI exposes different configuration choices in each of these modes. We will broadly cover the settings available under each mode and the recommended values for each setting. Review [Amazon VPC CNI internals](./vpc-cni.md) before proceeding to the guide sections dedicated to networking modes.
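One practical starting point is to check which mode-related settings are currently active. The sketch below, assuming a working kubeconfig and the `kubernetes` Python client, reads the `aws-node` DaemonSet's environment variables; `ENABLE_PREFIX_DELEGATION`, `ENABLE_POD_ENI`, and `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG` are VPC CNI settings, while the fallback defaults shown are assumptions to verify against your CNI version.

```python
# Illustrative sketch: read the aws-node DaemonSet env vars to see which
# VPC CNI networking mode settings are enabled on this cluster.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

ds = apps.read_namespaced_daemon_set("aws-node", "kube-system")
env = {
    e.name: e.value
    for c in ds.spec.template.spec.containers
    for e in (c.env or [])
}
# Fallback defaults below are assumptions; confirm against your CNI version.
print("Prefix Mode:            ", env.get("ENABLE_PREFIX_DELEGATION", "false"))
print("Security Groups per Pod:", env.get("ENABLE_POD_ENI", "false"))
print("Custom Networking:      ", env.get("AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG", "false"))
```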
## Understanding IPv6 and Kubernetes Dual-Stack
EKS's support for IPv6 is focused on solving the IP exhaustion problem, which stems from the limited size of the IPv4 address space. IPv4 exhaustion is a significant concern for cluster operators.
EKS's IPv6 implementation is distinct from Kubernetes’ “IPv4/IPv6 dual-stack” feature. Amazon EKS doesn't support dual-stacked pods or services. As a result, you can't assign both IPv4 and IPv6 addresses to your pods and services.
By default, EKS assigns IPv4 addresses to your Pods and Services. In an IPv6 EKS cluster, Pods and Services receive IPv6 addresses, while legacy IPv4 endpoints can still connect to Services running on IPv6 clusters, and Pods can still connect to legacy IPv4 endpoints outside the cluster.
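To see which IP family your cluster assigns, the following sketch (assuming a working kubeconfig and the `kubernetes` Python client) classifies each Pod's address as IPv4 or IPv6.

```python
# Illustrative sketch: classify each Pod's address family. On an IPv6 EKS
# cluster Pods receive IPv6 addresses; by default they receive IPv4. EKS
# does not assign both, since dual-stack Pods are unsupported.
import ipaddress
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    if pod.status.pod_ip:  # skip Pods that have no address yet
        family = ipaddress.ip_address(pod.status.pod_ip).version
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
              f"{pod.status.pod_ip} (IPv{family})")
```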
This guide covers implementation details for IPv6 in a [dedicated section](./ipv6.md), including best practices for running and migrating to IPv6 clusters.
## Using Alternate CNI Plugins
The AWS VPC CNI plugin is the only officially supported [networking plugin](https://kubernetes.io/docs/concepts/cluster-administration/networking/) on EKS. The VPC CNI provides native integration with the AWS VPC and works in underlay mode. In underlay mode, Pods and hosts are located at the same network layer and share the same address space. The IP address of a Pod is consistent from the cluster and VPC perspective.
However, since EKS runs upstream Kubernetes and is certified Kubernetes conformant, you can use alternate [CNI plugins](https://github.com/containernetworking/cni). Alternate CNIs may work in an overlay network mode over an Amazon VPC. For example, Pod networking may be a simulated or encapsulated network overlaying the VPC.
One reason to choose an alternate CNI plugin is the ability to operate Pods without assigning each Pod a unique VPC IP address, since the pool of available VPC IP addresses may be limited. However, adopting a different CNI plugin may reduce network performance, which can be suboptimal for network-intensive workloads. If you intend to use an alternate CNI plugin in production, we recommend that you either obtain commercial support or develop in-house expertise in the open source CNI plugin project. Check the [EKS Alternate CNI documentation](https://docs.aws.amazon.com/eks/latest/userguide/alternate-cni-plugins.html) for a list of partners and installation instructions.