# Kubernetes Installation Guide
<!-- ## Current Environment Architecture
**A single master with multiple workers; multi-master comes in a later phase.**

## 基礎設定
Environment specs, OS: Ubuntu 18.04
| Name | CPU | RAM | IP | Storage |
| ----------- | ------ | --- |:-------------:| ------- |
| k8s-master | 4 vCPU | 16G | 192.168.100.3 | 50G |
| k8s-worker1 | 4 vCPU | 16G | 192.168.100.4 | 60G |
Update `/etc/hosts` on every machine, adding each IP & hostname:
```
192.168.100.3 k8s-master
192.168.100.4 k8s-worker1
``` -->
## Prerequisites (must be met on every node)
### Network Requirements

### Install Required Packages and Configure the Environment
```
$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
```
Install Docker; it is recommended to follow the official [Docker installation guide](https://docs.docker.com/engine/install/ubuntu/).
If you are installing a Kubernetes version before 1.22.x, add the following Docker settings:
```bash
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Note: overlay2 is the preferred storage driver for systems running Linux kernel
# version 4.0 or higher, or RHEL or CentOS using version 3.10.0-514 and above.
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
```
Disable swap:
```
# swapoff -a
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
Note: if this is a brand-new VM or physical machine, it is recommended to reboot once first so swap is actually created before you disable it.
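The `sed` expression above can be previewed on a throwaway copy before touching the real `/etc/fstab`; a minimal self-contained sketch (the sample entries are hypothetical):

```shell
# Build a sample fstab in a temp file and apply the same sed expression,
# confirming that only the swap entry gets commented out.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$fstab"
cat "$fstab"
# UUID=abcd-1234 / ext4 defaults 0 1
# #/swapfile none swap sw 0 0
```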
Configure IP forwarding (if the network environment is more complex, with public IPs or VIPs, see this [reference](https://www.itread01.com/content/1535183902.html) for additional settings):
```
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```
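The same `tee`-plus-heredoc pattern can be dry-run against a scratch directory first to confirm what will land in `/etc/sysctl.d` (the scratch path here is a temporary stand-in):

```shell
# Write the sysctl fragment into a scratch directory instead of /etc/sysctl.d,
# then count the keys to confirm the file content before doing it for real.
dir=$(mktemp -d)
cat <<EOF | tee "$dir/k8s.conf"
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
grep -c '= 1' "$dir/k8s.conf"
# 2
```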
### If NVIDIA Docker Is Needed
#### Install the NVIDIA driver (on nodes that have a GPU)
Add the NVIDIA Ubuntu repository:
```bash
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
```
Install the NVIDIA graphics driver:
```bash
sudo apt install nvidia-driver-510
```
#### Install nvidia-docker
Set up the package repository and the GPG key:
```bash=
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
```shell=
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
```
Edit Docker's `daemon.json`:
```json=
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```
For why `exec-opts` is needed, see this [explanation](https://blog.51cto.com/riverxyz/2537914); it also prevents the following error:
```
The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
```
For Kubernetes versions, see the [releases](https://kubernetes.io/releases/) page.
## Control Plane Install
### Configure Kubernetes Package Repository & Install kubeadm kubectl kubelet
[Official documentation](https://v1-22.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
```
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update
$ export KUBE_VERSION="1.22.9"
$ sudo apt-get update && sudo apt-get install -y kubelet=${KUBE_VERSION}-00 kubeadm=${KUBE_VERSION}-00 kubectl=${KUBE_VERSION}-00
$ sudo apt-mark hold kubeadm kubectl kubelet
$ sudo mkdir -p /etc/kubernetes/manifests/
```
Create a YAML file named `kubeadm-config.yaml`:
```yaml=
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.9
controlPlaneEndpoint: "${your_ip}"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
serverTLSBootstrap: true
```
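One way to render this file per-cluster is to keep a template and substitute the control-plane endpoint before running `kubeadm init`; a sketch under stated assumptions (the `YOUR_IP` marker and `.tpl` filename are illustrative, not part of the guide):

```shell
# Hypothetical template rendering: replace a YOUR_IP marker with the
# master IP used elsewhere in this guide, producing kubeadm-config.yaml.
dir=$(mktemp -d)
cat > "$dir/kubeadm-config.yaml.tpl" <<'EOF'
controlPlaneEndpoint: "YOUR_IP"
EOF
sed 's/YOUR_IP/192.168.100.3/' "$dir/kubeadm-config.yaml.tpl" > "$dir/kubeadm-config.yaml"
grep controlPlaneEndpoint "$dir/kubeadm-config.yaml"
# controlPlaneEndpoint: "192.168.100.3"
```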
### Kubeadm Init
```
$ sudo kubeadm init --config=kubeadm-config.yaml
```
When this step finishes, a join command is printed; copy it for later use on the worker nodes.
Grant your user access to the cluster:
```
$ mkdir -p $HOME/.kube
$ sudo cp -p /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Check node status:
```
$ kubectl get nodes
```
## Deploy a CNI
### Option 1: Calico
```
$ wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml
```
```
$ sed -i 's/192.168.0.0\/16/10.244.0.0\/16/g' calico.yaml  # this CIDR must match the podSubnet configured for the cluster
```
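The substitution can be sanity-checked against a sample manifest line before running it on the real `calico.yaml`:

```shell
# Verify the CIDR rewrite on a sample CALICO_IPV4POOL_CIDR value.
line='value: "192.168.0.0/16"'
echo "$line" | sed 's/192.168.0.0\/16/10.244.0.0\/16/g'
# value: "10.244.0.0/16"
```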
Edit the YAML: comment out the lines under `Auto-detect the BGP IP address` and replace them with `IP_AUTODETECTION_METHOD`, pointing it at the egress network interface:
```yaml=
# Auto-detect the BGP IP address.
#- name: IP
#  value: "autodetect"
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens5"
```
```
$ kubectl apply -f calico.yaml
```
Check that all pods in `kube-system` are running properly.

### Option 2: Cilium
See the [official documentation](https://docs.cilium.io/en/v1.9/gettingstarted/kubeproxy-free/) and install with Helm. To have Cilium replace kube-proxy, pass
`--skip-phases=addon/kube-proxy` to `kubeadm init`; if init has already been run, execute:
```shell=
$ kubectl -n kube-system delete ds kube-proxy
$ iptables-restore <(iptables-save | grep -v KUBE)
```
#### Join Token
```
$ kubeadm token create --print-join-command
```
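For automation, the token and CA-cert hash can be parsed out of the printed join command; the sample line below stands in for real `kubeadm` output (values hypothetical):

```shell
# Extract --token and --discovery-token-ca-cert-hash values from a join command.
join='kubeadm join 192.168.100.3:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1234abcd'
token=$(echo "$join" | awk '{for (i = 1; i <= NF; i++) if ($i == "--token") print $(i + 1)}')
cahash=$(echo "$join" | awk '{for (i = 1; i <= NF; i++) if ($i == "--discovery-token-ca-cert-hash") print $(i + 1)}')
echo "$token"    # abcdef.0123456789abcdef
echo "$cahash"   # sha256:1234abcd
```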
## Worker Node Installation
[kubeadm official documentation](https://v1-17.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
### 1. Configure Kubernetes Package Repository
```bash
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
### 2. Install Kubeadm
```
$ export KUBE_VERSION=<version>
$ sudo apt-get update && sudo apt-get install -y kubelet=${KUBE_VERSION}-00 kubeadm=${KUBE_VERSION}-00 kubectl=${KUBE_VERSION}-00
$ sudo apt-mark hold kubeadm kubectl kubelet
```
### 3. Add Worker Nodes to the Cluster
* On the master node, obtain the information below, then execute it on the worker node
```
sudo kubeadm join <master_node_ip>:<port> --token <token> --discovery-token-ca-cert-hash <discovery-token-ca-cert-hash>
```
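If the hash was lost, it can be recomputed from the cluster CA (`/etc/kubernetes/pki/ca.crt` on the master) with the standard openssl pipeline from the kubeadm docs; shown here against a throwaway self-signed certificate so the sketch runs anywhere:

```shell
# Recompute a --discovery-token-ca-cert-hash value from a CA certificate.
# A temporary self-signed cert stands in for /etc/kubernetes/pki/ca.crt.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
cahash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$cahash"   # sha256: followed by 64 hex digits
```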
### 4. Check Worker node status on master
```
$ kubectl get nodes
```
## Add-on: Kubernetes Dashboard
[Official documentation](https://github.com/kubernetes/dashboard)
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
```
#### Generate an authentication token
[Official documentation](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md)
**Create the YAML** (per the linked doc, a ClusterRoleBinding granting `cluster-admin` to `admin-user` is also needed for full dashboard access)
```yaml=
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
```
Deploy it:
```
$ kubectl apply -f dashboard-adminuser.yaml
```
Generate the token:
```
sudo kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
```
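To grab just the token value (for example, to paste into the Dashboard login), the `token:` line can be filtered out of the describe output with awk; the sample output below stands in for the real command:

```shell
# Pull the bare token value out of `kubectl describe secret` output.
describe_output='Name:         admin-user-token-abcde
Type:         kubernetes.io/service-account-token
token:        eyJhbGciOiJSUzI1NiIsImtpZCI6...'
echo "$describe_output" | awk '/^token:/ {print $2}'
# eyJhbGciOiJSUzI1NiIsImtpZCI6...
```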

### Issue
A freshly deployed Dashboard cannot be opened in Chrome because of an invalid certificate; see the [official fix](https://github.com/kubernetes/dashboard/blob/master/docs/user/installation.md#recommended-setup).
#### Create the secret (prepare the certificates first)
```
$ kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kubernetes-dashboard
```
#### Modify the YAML
```yaml=
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta8
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --tls-cert-file=/dashboard.crt
            - --tls-key-file=/dashboard.key
            #- --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            #- --apiserver-host=https://192.168.100.3:6443
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
```
### Edit the Service and change its type from ClusterIP to NodePort
```
kubectl edit service kubernetes-dashboard -n kubernetes-dashboard
```
## Add docker private registry in K8s
### 1. Create daemon.json, then restart the Docker service
(This step is only needed if the private registry has no certificate.)
Edit `/etc/docker/daemon.json`:
```json=
{
  "live-restore": true,
  "insecure-registries": ["<host ip>:5000"]
}
```
### 2. Create a Secret holding the Docker login credentials
* A secret must be created in every namespace that pulls images from the private registry
```bash=
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-name> \
  --docker-password=<your-pword> \
  --docker-email=<your-email> \
  -n <namespace>
```
* After it is created, you can export its YAML with:
```bash=
kubectl get secret regcred -n <namespace> --output=yaml
```
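In the exported YAML, the credentials sit under `data/.dockerconfigjson` as base64; they can be inspected by decoding that field (the payload below is a made-up sample, not a real secret):

```shell
# Round-trip a sample .dockerconfigjson payload the way you would decode
# the exported secret's data field.
encoded=$(printf '{"auths":{"registry.example.com:5000":{"username":"user"}}}' | base64 -w0)
echo "$encoded" | base64 -d
# {"auths":{"registry.example.com:5000":{"username":"user"}}}
```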
### 3. Because this is a private registry, step 1 must first be executed on every node
### 4. Create a test Pod
```yaml=
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
  namespace: <namespace>
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: regcred
```
## Example Kubernetes Service and Deployment YAML
```yaml=
apiVersion: v1
kind: Service
metadata:
  name: your_service_name
  labels:
    app: your_service_label
spec:
  type: NodePort
  ports:
    - port: your_exposed_port
      protocol: TCP
      targetPort: your_container_service_port
  selector:
    app: your_service_label
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your_service_name
spec:
  selector:
    matchLabels:
      app: your_service_label
  replicas: 1
  template:
    metadata:
      labels:
        app: your_service_label
        version: your_service_version
    spec:
      containers:
        - name: your_container_name
          image: docker_private_registry_url/your_image_name:tag
          resources:
            limits:
              cpu: your_service_vcpu_cores    # ex. "1"
              memory: your_service_memory     # ex. 2Gi
          ports:
            - containerPort: your_service_port
```
## Import into Rancher
### Install Rancher
```
$ docker run -d --restart=unless-stopped -v ${host_path}:/var/lib/rancher:rw --privileged -p 80:80 -p 443:443 rancher/rancher:v2.5.8
```
After waiting 30s~1m, select Add Cluster.

Then select Other Cluster; after entering a name and the options, an Import Cluster command appears. Run it.


### Recommended articles for troubleshooting
https://www.cnblogs.com/yangzp/p/15620155.html
###### tags:`Kubernetes` `k8s`