###### tags: `Side_Project` `DevOps` `Kubernetes`
[TOC]
# K8S basic
---
## 1. What is Kubernetes (K8s)

:::info
**Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications**
**`A lot of people get confused with Kubernetes as a PaaS solution — it’s not a PaaS solution. It is a platform to build PaaS solutions.`**
:::
---
## 2. Going back in time
**Traditional deployment era:**
Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.
**Virtualized deployment era:**
As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.
Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines.
Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.
**Container deployment era:** Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
Containers have become popular because they provide extra benefits, such as:
> * **Agile application creation and deployment**: increased ease and efficiency of container image creation compared to VM image use.
>* **Continuous development, integration, and deployment**: provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
> * **Dev and Ops separation of concerns**: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
> * **Observability** not only surfaces OS-level information and metrics, but also application health and other signals.
> * **Environmental consistency** across development, testing, and production: Runs the same on a laptop as it does in the cloud.
> * **Cloud and OS distribution portability**: Runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else.
> * **Application-centric management**: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
> * **Loosely coupled, distributed, elastic, liberated micro-services**: applications are broken into smaller, independent pieces and can be deployed and managed dynamically – not a monolithic stack running on one big single-purpose machine.
> * **Resource isolation**: predictable application performance.
> * **Resource utilization**: high efficiency and density.
---
## 3. Why you need Kubernetes and what it can do
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?
That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a [**canary deployment**](https://octopus.com/docs/deployment-patterns/canary-deployments) for your system.
Kubernetes provides you with:
> * **Service discovery and load balancing**
> Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
> * **Storage orchestration**
> Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
> * **Automated rollouts and rollbacks**
> You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
> * **Automatic bin packing**
> You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
> * **Self-healing**
> Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
> * **Secret and configuration management**
> Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
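Several of the features above come together in a single Deployment manifest. The sketch below is illustrative only: the names `web` and `db-secret` and the `nginx:1.25` image are placeholder assumptions, and it presumes a Secret named `db-secret` already exists in the cluster.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # hypothetical name
spec:
  replicas: 3                      # desired state: keep 3 Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25          # immutable image; a rollback reverts to a prior tag
        resources:
          requests:                # bin packing: the scheduler fits Pods by these numbers
            cpu: "250m"
            memory: "128Mi"
        livenessProbe:             # self-healing: restart the container if this check fails
          httpGet:
            path: /
            port: 80
        env:
        - name: DB_PASSWORD        # secret management: nothing baked into the image
          valueFrom:
            secretKeyRef:
              name: db-secret      # assumes a Secret named db-secret exists
              key: password
```

Changing `replicas` or the image tag and re-applying this manifest is all a rollout takes; Kubernetes reconciles the actual state toward the new desired state at a controlled rate.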
---
## 4. Kubernetes Components
When you deploy Kubernetes, you get a cluster.
A Kubernetes cluster consists of a set of worker machines, called [nodes](https://kubernetes.io/docs/concepts/architecture/nodes/), that run containerized applications. Every cluster has at least one worker node.
The worker node(s) host the [Pods](https://kubernetes.io/docs/concepts/workloads/pods/) that are the components of the application workload. The [control plane](https://kubernetes.io/docs/reference/glossary/?all=true#term-control-plane) manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.
This document outlines the various components you need to have a complete and working Kubernetes cluster.




### **Control Plane Components**
The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied).
Control plane components can be run on any machine in the cluster. However, for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine. See [Building High-Availability Clusters](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/) for an example multi-master-VM setup.
#### **kube-apiserver**
The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.
The main implementation of a Kubernetes API server is [kube-apiserver](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/). kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
#### **etcd**
Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a [back up](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster) plan for those data.
You can find in-depth information about etcd in the official [documentation](https://etcd.io/docs/).
#### **kube-scheduler**
Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
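As an example of those factors, a PodSpec can express both resource requirements and a hardware constraint that kube-scheduler must honor. This is a sketch: the Pod name and the `disktype=ssd` node label are assumed examples.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-task                   # hypothetical Pod
spec:
  containers:
  - name: worker
    image: busybox:1.36
    command: ["sleep", "3600"]
    resources:
      requests:                    # resource requirements the scheduler must satisfy
        cpu: "500m"
        memory: "256Mi"
  affinity:
    nodeAffinity:                  # hardware/policy constraint: only nodes labeled disktype=ssd
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
```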
#### **kube-controller-manager**
Control Plane component that runs controller processes.
Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
These controllers include:
> * **Node controller**: Responsible for noticing and responding when nodes go down.
> * **Replication controller**: Responsible for maintaining the correct number of pods for every replication controller object in the system.
> * **Endpoints controller**: Populates the Endpoints object (that is, joins Services & Pods).
> * **Service Account & Token controllers**: Create default accounts and API access tokens for new namespaces.
#### **cloud-controller-manager**
A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that just interact with your cluster.
The cloud-controller-manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.
As with the kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale horizontally (run more than one copy) to improve performance or to help tolerate failures.
The following controllers can have cloud provider dependencies:
> * **Node controller**: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
> * **Route controller**: For setting up routes in the underlying cloud infrastructure
> * **Service controller**: For creating, updating and deleting cloud provider load balancers
---
### **Node Components**
Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
#### **kubelet**
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.
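A PodSpec is exactly what the kubelet acts on. In the sketch below (the Pod name and image are illustrative), the kubelet would keep the container running and continuously enforce the two probes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                 # hypothetical Pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:                 # kubelet restarts the container when this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:                # kubelet marks the Pod NotReady when this fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
```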
#### **kube-proxy**
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) concept.
[kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.
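A Service definition like the sketch below (names and ports are illustrative) is what kube-proxy translates into packet-filtering rules on each node, so that traffic to the Service is spread across the Pods matching its selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc                    # hypothetical Service
spec:
  selector:
    app: web                       # routes to Pods labeled app=web
  ports:
  - protocol: TCP
    port: 80                       # port clients connect to on the Service
    targetPort: 8080               # container port the traffic is forwarded to
```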
#### **Container runtime**
The container runtime is the software that is responsible for running containers.
Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the [Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md).
### **Addons**
Addons use Kubernetes resources ([DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), etc) to implement cluster features. Because these are providing cluster-level features, namespaced resources for addons belong within the kube-system namespace.
Selected addons are described below; for an extended list of available addons, please see [Addons](https://kubernetes.io/docs/concepts/cluster-administration/addons/).
#### **DNS**
While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.
[Cluster DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
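For instance, assuming a Service named `web-svc` exists in the `default` namespace and the cluster uses the default `cluster.local` domain, any container can resolve it through the injected cluster DNS server. A hypothetical sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-check                  # hypothetical Pod
spec:
  containers:
  - name: lookup
    image: busybox:1.36
    # The cluster DNS server is injected into the container's /etc/resolv.conf,
    # so web-svc resolves as web-svc.default.svc.cluster.local
    # (or simply web-svc from within the same namespace).
    command: ["nslookup", "web-svc.default.svc.cluster.local"]
  restartPolicy: Never
```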
#### **Web UI (Dashboard)**
[Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard) is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.
#### **Container Resource Monitoring**
[Container Resource Monitoring](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/) records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
#### **Cluster-level Logging**
A [cluster-level logging](https://kubernetes.io/docs/concepts/cluster-administration/logging/) mechanism is responsible for saving container logs to a central log store with search/browsing interface.
---
## Kubernetes Components (Recap)
### Master Components
Master components make up the cluster's control plane. They are responsible for cluster-wide decisions, such as Pod scheduling, and for detecting and responding to cluster events, such as spinning up a new Pod when the replica count falls short. These components usually run on the same machine, and that machine typically does not run user containers.
* #### kube-apiserver:
Exposes the Kubernetes API; it can be seen as the front end of the Kubernetes control plane.
* #### etcd:
A key-value store used to persist Kubernetes cluster data.
* #### kube-scheduler:
Assigns newly created Pods to nodes, taking into account resource requirements, hardware/software/policy constraints, data locality, and more.
* #### kube-controller-manager:
Runs the following controllers:
* #### Node controller:
Notices and responds when a node goes down.
* #### Replication controller:
Maintains the Pod replica count for each Replication controller object.
* #### Endpoint controller:
Populates Endpoint objects, which can be thought of as the glue joining Pods and Services; you can inspect them with `kubectl get endpoints`. (Oddly, this command only accepts the plural form `endpoints`.)
* #### Service account & token controllers:
Create default accounts and API access tokens for new namespaces.
* #### cloud-controller-manager:
Runs the controllers that interact with the underlying cloud provider. Judging from the documentation, it serves as an intermediate layer between the cloud provider and Kubernetes.
---
### Node Components
**Node components include the kubelet, kube-proxy, and the container runtime.**
* #### kubelet:
Manages the containers inside a Pod, ensuring that they run and stay healthy according to the container spec.
* #### kube-proxy:
Handles the Kubernetes Service abstraction, the network rules on the host, traffic forwarding, and so on.
* #### container runtime:
The engine that actually runs the containers, such as Docker or rkt.
* #### Addons
An addon is a supplementary or plug-in component; here it refers to the Pods and Services that implement cluster features. Running `docker container ls` on a cluster machine shows many running containers, and some of them are the addons mentioned here. Common addons include:
    * DNS
    * Web UI (dashboard)
    * Container resource monitoring
    * Cluster-level logging

These addons live in the kube-system namespace.
---
>Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Kubernetes is a system that helps us automate the deployment, scaling, and management of containerized applications. Compared with manually deploying each container to every machine, Kubernetes can do the following for us:
* Deploy multiple containers to one machine at the same time, or even across multiple machines.
* Manage the state of each container. If a container providing a service crashes, Kubernetes detects it and restarts the container to keep the service available.
* Migrate all the containers on one machine to another machine.
* Provide high scalability: a Kubernetes cluster can grow from a single machine to many machines running together.
:::spoiler **Reference:**
:mega: [An introduction to what Kubernetes is](https://ithelp.ithome.com.tw/articles/10192401)
:mega: [Basic components and concepts of K8s](https://juejin.im/post/5eb4cdf36fb9a0436d41a4f3)
:mega: [Learnings From Two Years of Kubernetes in Production](https://lambda.grofers.com/learnings-from-two-years-of-kubernetes-in-production-b0ec21aa2814)
:::
:::info
## Resource
### 1. Tutorial for Kubernetes
* Introduction to Kubernetes
https://lnkd.in/gz4-zvdC
* Kubernetes tutorials:
Hands-on labs with certification
https://lnkd.in/g_2SVjvs
* Networking with Kubernetes | Basics of Kubernetes Networking
https://lnkd.in/gb7UpM6N
* Kubernetes Full Course | Kubernetes Architecture
https://lnkd.in/g8NATDPQ
* What is Kubernetes (playlist)
https://lnkd.in/gACGJzAq
* Docker Containers and Kubernetes Fundamentals - Full Hands-On Course
https://lnkd.in/gwtEN6hS
* Kubernetes for Beginner
https://lnkd.in/gdYZ4bgQ
* Kubernetes Tutorial for Beginners
https://lnkd.in/duGZwHYX
* Kubernetes Tutorial For Beginners - Learn Kubernetes
https://lnkd.in/gmjRkGSJ
* Kubernetes Full Course
https://lnkd.in/gqr2nzYT
* Kubernetes Course - Full Beginners Tutorial
https://lnkd.in/de84ESNv
* Kubernetes Tutorial For Beginners
https://lnkd.in/gSRYYGPG
### 2. Labs
1. Kubernetes Hands-on Lab #1 – Setting up 5-Node K8s Cluster
2. Kubernetes Hands-on Lab #2 – Running Our First Nginx Cluster
3. Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster
Link 🔗 https://lnkd.in/gpB4DNs6
* Kubernetes 101 workshop - complete hands-on
https://lnkd.in/gUCeEjFF
* Build a Kubernetes Home Lab from Scratch step-by-step!
https://lnkd.in/gM-kRUEh
* Kubernetes Hands on
https://lnkd.in/guhw9iKa
* Hands-on with Kubernetes on Cloud
https://lnkd.in/gTcKi2Fq
* Kubernetes Project for beginners
https://lnkd.in/gSc2KDAb
### 3. Book
* https://lnkd.in/gM7ts9XC
:::