[TOC]
# ReplicaSet
## What is ReplicaSet?
- It’s a new generation of ReplicationController.
- **It replaces ReplicationController completely**.
:::warning
You should always create ReplicaSets instead of ReplicationControllers from now on.
:::
## Comparing: ReplicaSet vs. ReplicationController
- ReplicaSet has more expressive pod selectors (label selectors):
    - ReplicationController
        - Only allows matching pods that include a **certain label**.
    - ReplicaSet
        - Allows matching pods that:
            - lack a certain label.
            - include a **certain label key**, regardless of its value.
                - e.g. `env=*` (see the selector sketch after this list)
- A single ReplicaSet can match more than one set of pods and treat them as a single group.
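- For illustration, a minimal selector sketch (the `env` key is an assumption for this example) that matches every pod carrying an `env` label, whatever its value; a ReplicationController selector cannot express this:
```yaml=
selector:
  matchExpressions:
  - key: env          # the pod must have a label with this key
    operator: Exists  # any value is fine; the `values` field is omitted
```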
## Example: YAML Definition of a ReplicaSet
```yaml=
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
```
> line 1 `apiVersion: apps/v1beta2`: ReplicaSets belong to the apps API group and version v1beta2.
> line 7-9 `matchLabels`: a matchLabels selector, much like the `selector` in a ReplicationController.
> 
> >Instead of listing labels the pods need to have directly under the selector property, you’re specifying them under **`selector.matchLabels`**.
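A quick side-by-side sketch of the two selector styles described in the note (both use the same `app: kubia` label):
```yaml=
# ReplicationController: labels the pods need to have are listed directly under `selector`
selector:
  app: kubia

# ReplicaSet: the same labels go under `selector.matchLabels`
selector:
  matchLabels:
    app: kubia
```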
## Hands-on: Create a ReplicaSet
- Create a ReplicaSet
```bash=
kubectl create -f my-rs.yaml
```
- Examine the ReplicaSets
```bash=
kubectl get rs
kubectl describe rs
```
- More expressive pod selector usage: **`matchExpressions`**
- You can rewrite the selector to use the more powerful **matchExpressions** property.
```yaml=
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - kubia
```
> line 3: This example requires the pod to contain a label with the “app” key.
> line 5-6: The label’s value must be “kubia”.
- Four valid operators in **`matchExpressions`**:
    - `In`
        - Label’s value must match one of the specified `values`.
    - `NotIn`
        - Label’s value must not match any of the specified `values`.
    - `Exists`
        - Pod must include a label with the specified key.
        - When using this operator, you shouldn’t specify the `values` field.
    - `DoesNotExist`
        - Pod must not include a label with the specified key.
        - The `values` property must not be specified.
:::info
When you specify more than one expression under `matchExpressions`, or combine `matchExpressions` with `matchLabels`, a pod matches only if **all** of the listed conditions are satisfied.
:::
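A sketch of such a combined selector (the `env` values are assumptions for illustration); a pod matches only if it satisfies every condition listed under both properties:
```yaml=
selector:
  matchLabels:
    app: kubia         # condition 1: the pod carries app=kubia
  matchExpressions:
  - key: env           # condition 2: the pod's env label is one of the listed values
    operator: In
    values:
    - prod
    - staging
```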
- Delete a ReplicaSet
```bash=
kubectl delete rs kubia
```
# DaemonSet
## What is DaemonSet?
- While **ReplicaSets** are used for running **a specific number of pods** deployed **anywhere** in the K8s cluster,
:::info
**DaemonSets** run pods on **each and every node** in the cluster.
:::
- Some example use cases for **DaemonSets**:
    - log collector
    - resource monitor
    - Kubernetes’ own kube-proxy
- A DaemonSet doesn’t have any notion of a desired replica count.
- If a node goes down, the DaemonSet doesn’t cause the pod to be created elsewhere.
- A DaemonSet only ensures that a pod matching its pod selector is running on each node.
- If a new node is added to the cluster, the DaemonSet immediately deploys a new pod instance to it.
:::info
A DaemonSet deploys pods to all nodes in the cluster, unless you specify the **`nodeSelector`** property in the pod template.
:::
## Example: YAML Definition of a DaemonSet
```yaml=
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: main
        image: luksa/ssd-monitor
```
> line 14-15: a node selector selects nodes with the `disk=ssd` label.
## Hands-on: Create a DaemonSet
- Create a DaemonSet
```bash=
kubectl create -f my-daemonset.yaml
```
- Examine the DaemonSets
```bash=
kubectl get ds
```
- If no pod is running after you create the DaemonSet, check whether you remembered to label your nodes.
- The DaemonSet should detect that the nodes’ labels have changed and deploy the pod to all nodes with a matching label.
## Hands-on: Label the Nodes in Cluster
- Assign a new label to a node
```bash=
kubectl label node my-node1 disk=ssd
```
- Replace with another label
```bash=
kubectl label node my-node1 disk=hdd --overwrite
```
# Job
- It allows you to run a pod whose container isn’t restarted when the process running inside finishes successfully.

## Example: YAML Definition of a Job
```yaml=
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  completions: 5
  parallelism: 2
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: luksa/batch-job
```
> No pod selector is required, because the pod won’t be recreated after it finishes its work.
> line 6 `completions`: Ensure that five pods complete successfully.
> line 7 `parallelism`: Up to two pods can run in parallel.
> line 13 `restartPolicy`: Jobs can’t use the default restart policy, which is Always. You need to explicitly set the restart policy to either **OnFailure** or **Never**.
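As a contrast to lines 6-7, a sketch of a purely sequential variant (an illustrative assumption, not part of the example above): leaving out `parallelism` keeps its default of one pod at a time, so the five pods run one after another.
```yaml=
spec:
  completions: 5   # five pods must finish successfully, one after another (parallelism defaults to 1)
```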
## Hands-on: Create a Job
- Create a Job
```bash=
kubectl create -f my-job.yaml
```
- Examine the Jobs
```bash=
kubectl get jobs
```
- After the pod completes its job, it no longer shows up in plain `kubectl get po`; you have to add the `-a` option.
```bash=
kubectl get po -a
```
- And you can examine the log of the pod that the Job created (use the pod name listed by `kubectl get po -a`).
```bash=
kubectl logs <pod-name>   # substitute the pod name, e.g. batch-job-xxxxx from 'kubectl get po -a'
```
# CronJob
- If you want to run a job at a specified time, you can use a CronJob.
- It is created in much the same way as a Job; basically the only difference is that you can specify a schedule.
## Example: YAML Definition of a CronJob
```yaml=
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: batch-job-every-fifteen-minutes
spec:
  schedule: "0,15,30,45 * * * *"
  jobTemplate:
    spec:
      # <omitted... written the same way as a Job definition>
```
> line 6 `schedule`: This job should run at minutes 0, 15, 30, and 45 of every hour, every day.
- The schedule contains the following five entries (a second example follows the list):
* Minute
* Hour
* Day of month
* Month
* Day of week
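A second schedule sketch (the chosen time of day is just an assumption for illustration) that maps onto the five fields above: run every day at 03:00.
```yaml=
schedule: "0 3 * * *"   # minute=0, hour=3, any day of month, any month, any day of week
```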