# Kubernetes-Basic
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.
In short, Kubernetes is an open-source container orchestration system.
## Why Kubernetes Is Important

| Virtualized Deployment | Container Deployment |
| ---------------------- | -------------------- |
| 1. Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware. | 1. Containers are similar to VMs, but they have relaxed isolation properties to share the operating system (OS) among the applications. |
| 2. Each VM has its own filesystem, CPU, memory, as well as kernel. | 2. Similar to a VM, a container has its own filesystem, share of CPU, memory, process space, and more, but containers share the kernel of the host OS. |
| 3. VMs are tightly coupled with the underlying infrastructure. | 3. Containers are decoupled from the underlying infrastructure, so they are portable across clouds and OS distributions. |
## What Can Kubernetes Do?
Containers are a good way to bundle and run your applications. Kubernetes takes care of scaling and failover for your application, provides deployment patterns, and more.
Kubernetes provides the following features:
1. Service discovery using DNS and traffic load balancing
2. Storage orchestration to mount a filesystem of your choice, whether cloud or local storage.
3. Automated rollouts and rollbacks
4. Self-healing
5. Secret and configuration management
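Several of these features map directly to `kubectl` commands. The following is an illustrative transcript, assuming a running cluster and a hypothetical Deployment named `web`; names and values are examples, not from the original text:

```shell
# Automated rollout: update the Deployment's container image
kubectl set image deployment/web web=nginx:1.27

# Watch the rollout progress
kubectl rollout status deployment/web

# Automated rollback: revert to the previous revision
kubectl rollout undo deployment/web

# Secret and configuration management
kubectl create secret generic db-credentials --from-literal=password=s3cr3t
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug
```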
# Kubernetes Components:
1. **Nodes:** A set of worker machines.
2. **Pods:** Worker nodes host Pods, which are the components of the application workload.
3. **Control Plane:** The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.

### Control Plane Components:
Control plane components make global decisions about the cluster (for example, scheduling of Pods), as well as detecting and responding to cluster events.
**1. kube-apiserver:**
When you run a kubectl command, the kubectl utility reaches the kube-apiserver.
The kube-apiserver first *authenticates the request* and *validates it*, then retrieves the data from the etcd cluster and responds with the requested information.
Whenever we make a POST request to create a Pod, the *request is authenticated first and then validated*. The API server creates a Pod object without assigning it to a node, *updates the information in the etcd server*, and informs the user that the Pod has been created.
> The ***kube-scheduler*** continuously monitors the API server and notices that there is a new Pod with no node assigned. It identifies the right node, places the new Pod on that node, and communicates this back to the kube-apiserver.
> The API server updates the information in the etcd cluster.
After that, the kube-apiserver passes the information to the kubelet of that node, and the kubelet then creates the Pod. The kubelet also instructs the container runtime engine to deploy the application image.
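The request flow above can be observed from the command line. A minimal sketch, assuming a working cluster (the Pod name `nginx` is just an example):

```shell
# Create a Pod; the request goes to the kube-apiserver, which
# authenticates and validates it, then writes the object to etcd
kubectl run nginx --image=nginx

# The NODE column is empty until the kube-scheduler binds the Pod
kubectl get pod nginx -o wide

# The scheduling and kubelet steps appear in the Events section
kubectl describe pod nginx
```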
**2. etcd:**
All Kubernetes objects are stored on etcd. etcd is a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.
*Note: Always run etcd as a cluster of odd members.*
Backing up an etcd cluster can be accomplished in two ways: **etcd built-in snapshot** via `etcdctl snapshot save` and **volume snapshot**.
> Every piece of information returned by a `kubectl get` command comes from the etcd server. Every change made to the cluster is updated in etcd; only once etcd has been updated is the change considered complete.
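A built-in snapshot can be taken with `etcdctl`. A sketch, assuming direct access to the etcd member; the endpoint and certificate paths are illustrative (they match a typical kubeadm layout, but verify them on your cluster):

```shell
# Take a snapshot of the etcd keyspace
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Inspect the snapshot (size, revision, number of keys)
ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db
```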
**3. kube-scheduler:**
Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
*Factors taken into account for scheduling decisions include:*
* individual and collective resource requirements
* hardware/software/policy constraints
* affinity and anti-affinity specifications
* data locality
* inter-workload interference
* deadlines
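Several of these factors are expressed directly in the Pod spec. A sketch manifest (all names, labels, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:            # individual resource requirements
      requests:
        cpu: "500m"
        memory: "256Mi"
  nodeSelector:           # hardware/software/policy constraint
    disktype: ssd
  affinity:               # node affinity specification
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["us-east-1a"]
```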
**4. kube-controller-manager:**
Control Plane component that runs controller processes. Logically, each controller is a different process but to reduce complexity, they are all compiled into a single binary and run in a single process.
***Types of controllers:***
* Node controller: Responsible for noticing and responding when nodes go down.
* Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
* Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
* Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.
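For example, the Job controller reacts to Job objects like the following sketch; when such an object is created, the controller spawns a Pod to run the task to completion (the name, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  template:
    spec:
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never   # one-off task; do not restart on success
  backoffLimit: 3            # retry a failed Pod up to 3 times
```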
**5. cloud-controller-manager:**
A Kubernetes control plane component that embeds cloud-specific control logic.
This component is present only when we run Kubernetes on a cloud such as AWS, Azure, or GCP. On-premises clusters do not have this component.
## Node Components:
Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
### kubelet:
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
> The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.
### Kube-Proxy
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
> kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.
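For example, kube-proxy programs the rules that make a Service like the following reachable; a sketch manifest (labels and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # traffic is load-balanced across Pods with this label
  ports:
  - port: 80          # the Service's cluster-internal port
    targetPort: 8080  # the container port inside the Pods
```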
### Container runtime
The container runtime is the software that is responsible for running containers.
Kubernetes supports any container runtime that implements the Container Runtime Interface (CRI), such as containerd and CRI-O (Docker Engine can be used via the cri-dockerd adapter).
## Docker Swarm vs Kubernetes
**Docker Swarm:**
The following components are installed directly on the OS of the master (manager) node:
* API server
* controller
* scheduler
* database

These components must be installed on the master node itself.
**Kubernetes:**
In Kubernetes, the equivalent control plane components run inside containers (within Pods) rather than directly on the OS.
So the main difference is: when the master node crashes in Docker Swarm, another master node is needed to take over the workloads. In Kubernetes, when a node crashes, the workload containers are automatically rescheduled onto other nodes.