# DNS Models for Kubernetes and Cilium

## Overview of DNS

Typically CoreDNS is installed in a cluster at initialization as a Deployment. By default this Deployment uses the underlying host's DNS resolvers.

### How does Kubernetes Assign DNS to Pods?

### Kubelet clusterDNS

When a pod is deployed using a CNI, the pod is by default configured with `dnsPolicy: ClusterFirst`. Pods with this configuration are given the DNS entries provided by the kubelet. When Kubernetes is deployed with a service CIDR, the first IP address in the CIDR is allocated to the `kubernetes.default` service and the 10th is allocated to the `kube-dns.kube-system` service. This is done by providing the kubelet with the `clusterDNS` parameter. You can optionally override this parameter, and therefore the values given to pods on that specific node. This is one way to ensure that DNS is deployed in a scalable way for clusters that change things at the infrastructure layer.

### Pod DNS Configuration

There are also a variety of ways to configure pod DNS.

#### dnsPolicy: Default

A terrible name. This option uses the underlying node's DNS configuration; it is not the default by any interpretation. It is useful if you are not using Kubernetes DNS for service discovery, since it offloads DNS to the resolvers configured on the node.

#### dnsPolicy: None

With this policy you are expected to provide more specific DNS configuration via the `pod.spec.dnsConfig` option. For more detail, see `kubectl explain pod.spec.dnsConfig` or the [docs](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config). For example:

```yaml
dnsConfig:
  nameservers:
    - 192.0.2.1 # this is an example
  searches:
    - ns1.svc.cluster-domain.example
    - my.dns.search.suffix
  options:
    - name: ndots
      value: "2"
    - name: edns0
```

This model allows for multiple DNS configurations in a single cluster.
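As a sketch of that multi-configuration model: assuming a second CoreDNS deployment exposed behind a Service with the hypothetical ClusterIP `10.96.0.20`, a pod could be pinned to that resolver with `dnsPolicy: None`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-none-example
spec:
  dnsPolicy: None        # ignore the kubelet-provided resolvers entirely
  dnsConfig:
    nameservers:
      - 10.96.0.20       # hypothetical ClusterIP of a second CoreDNS Service
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
    options:
      - name: ndots
        value: "2"
  containers:
    - name: app
      image: registry.k8s.io/e2e-test-images/agnhost:2.39
      command: ["sleep", "3600"]
```

Because the pod's `resolv.conf` is built entirely from `dnsConfig`, nothing here depends on which node the pod lands on.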
You can deploy CoreDNS multiple times and use this mechanism to configure workloads or namespaces to use a specific DNS service.

- Benefit: kube-dns and Cilium FQDN policy will still work as designed.
- Detriment: specific configuration is needed when deploying things into the cluster. Much of this can be automated with a mutating controller such as Kyverno.

## Cilium FQDN Theory of Operation

For Cilium FQDN-based policy to work, the DNS proxy needs to observe the DNS traffic that will be used to map FQDN traffic to the resolved addresses. To enable this we have to enable DNS observability. This work is, for the most part, specific to the node where traffic is being enforced. If a pod on node A sends a DNS request to a DNS server, and we are watching the traffic on egress from the workload, we record the resolution of the DNS request and assign the resolved FQDN an identity. This identity maps to all addresses resolved at the time of resolution. If the resolution changes, we update the identity with the new resolution and keep the old addresses around as long as any workload on the node continues to use any of the IP addresses associated with that FQDN. The takeaway is that we have to understand that this traffic is DNS traffic, and that when using an "entity" based policy all components in the datapath have to have an identity.

Example policy:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
    - toFQDNs:
        - matchName: "api.github.com"
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            "k8s:k8s-app": kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
```

## Kubernetes Pod DNS Options

## Customizing CoreDNS

## Augmenting CoreDNS

## Changing the default Pod DNS Settings via Kubelet

## Changing the default DNS Settings via Pod.Spec

## Cilium Local Redirect Policy
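As a minimal sketch of what a Local Redirect Policy looks like, assuming a node-local DNS cache DaemonSet labeled `k8s-app: node-local-dns` (the names here are illustrative, modeled on the node-local DNS use case):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumLocalRedirectPolicy
metadata:
  name: nodelocaldns
  namespace: kube-system
spec:
  redirectFrontend:
    serviceMatcher:
      serviceName: kube-dns       # traffic destined to this Service...
      namespace: kube-system
  redirectBackend:
    localEndpointSelector:
      matchLabels:
        k8s-app: node-local-dns   # ...is redirected to this pod on the same node
    toPorts:
      - port: "53"
        name: dns
        protocol: UDP
      - port: "53"
        name: dns-tcp
        protocol: TCP
```

The redirect happens in the Cilium datapath on each node, so pods keep resolving against the `kube-dns` ClusterIP while the traffic is served by the node-local backend.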
## Topology Aware Routing with Cilium

## Topology Aware Routing with Kube Proxy
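In either case, topology aware routing is opted into per Service via an annotation; a minimal sketch, assuming Kubernetes v1.27+ where `service.kubernetes.io/topology-mode` replaced the older `topology-aware-hints` annotation, and with a hypothetical Service name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # hypothetical Service name
  annotations:
    service.kubernetes.io/topology-mode: Auto   # ask EndpointSlices for zone hints
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

With the annotation set, the EndpointSlice controller adds zone hints to endpoints, and the datapath (kube-proxy or Cilium) prefers endpoints hinted for the client's zone.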