---
title: "K8S Hands-on Lab: Docker"
tags: Note_Linux, Docker, Linux
description: Docker, Linux, Program, Online
---
# K8S Hands-on Lab: Docker
# Querying Running Services on Rocky Linux with systemctl
On Rocky Linux, you can use the `systemctl` command to query currently running services. Here are a few common commands for querying and managing services:
View the status of all services:
Use the `systemctl list-units` command to view the status of all service units:
```shell=
systemctl list-units --type=service
```
This command lists all service units and their states, active and inactive alike.
View currently running services:
Use `systemctl list-units` to filter for running services:
```shell=
systemctl list-units --type=service --state=running
```
This command shows only the services that are currently running.
View the status of a specific service:
Use `systemctl status` to inspect a single service. For example, to check the `httpd` service:
```shell=
systemctl status httpd
```
List all enabled services:
Use `systemctl list-unit-files` to view all service unit files:
```
systemctl list-unit-files --type=service
```
This command lists every service unit file and its enablement state (enabled, disabled, and so on).
Check service processes with ps:
You can also use `ps` to check whether a service process is running. For example, to check `httpd`:
```shell=
ps aux | grep httpd
```
These commands help you manage and monitor services on Rocky Linux.
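As a small sketch, the `list-units` output can be post-processed with standard text tools. The helper below pulls out only the names of units in the `running` state; the column layout (UNIT LOAD ACTIVE SUB DESCRIPTION) is assumed to match the default `systemctl` output:

```shell
# Filter "systemctl list-units --type=service" output down to the
# names of running units. Column 4 is the SUB state.
running_units() {
  awk '$4 == "running" {print $1}'
}

# Usage (on a systemd host):
#   systemctl list-units --type=service --no-legend --no-pager | running_units
```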
## Lab1: Installing the Docker Service
```shell=
[student@localhost ~]$ sudo systemctl stop httpd
[student@localhost ~]$ sudo systemctl stop mariadb
```
+ To install the CE edition, first remove any old Docker packages
```shell=
sudo su -
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
```
+ Install the repository and the packages; the versions installed here were 20.10.6 and 24.0.1
+ Check the available disk space first; installing Docker needs some room
```shell=
yum -y install yum-utils
```
```shell=
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce.x86_64
#yum install docker-ce-18.06.0.ce-3.el7   # not done in class
```
Alternative: 18.06.0 (not done in class)
```
#yum -y install yum-utils
# On the host, if yum is stuck:
#pkill -9 yum
#yum clean all
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.0.ce-3.el7 -y
systemctl start docker;systemctl enable docker
```
docker_reinstall.txt (not done in class)
```
yum remove docker-ce docker-ce-cli-19.03.4-3.el7.x86_64
yum install docker-ce-18.06.0.ce-3.el7
```
## Installing Docker on CentOS 8
https://intone.cc/2020/11/%E5%A6%82%E4%BD%95%E5%9C%A8centos-8-%E4%B8%8A%E5%AE%89%E8%A3%9D-docker-%E6%B5%81%E7%A8%8B/
+ Start the Docker service
```shell=
systemctl start docker;systemctl enable docker
```
+ Grant a regular user permission to use docker (not done in class)
```
# usermod -aG docker users
```
+ Checkpoint: check the Docker version
```shell=
docker version
```
```
Client: Docker Engine - Community
Version: 20.10.6
API version: 1.41
```
```shell=
docker info
```
```
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.6
```
## Lab2: Container Management
Create a container: start a container named "alpine" from the alpine image
```
# docker pull alpine
# docker run -it --name alpine alpine sh
# df -h
```
https://quay.io/repository/libpod/busybox
```
docker pull quay.io/libpod/busybox
```
+ Query containers: list the running containers
```
# docker container ls
```
```shell=
docker ps
```
+ List all containers, including stopped ones
```
# docker container ls -a
```
```shell=
docker ps -a
```
+ Enter the container: open a shell inside the container "alpine"
```
# docker exec -it alpine sh
```
+ Leave the container console
+ normal shell style: type exit
+ hotkey: Ctrl+P, Ctrl+Q (detach without stopping)
+ Delete a container
+ stop the container first, then remove it
```
# docker container stop alpine
# docker container rm alpine
```
+ Force-remove a running container
```
# docker container rm alpine -f
```
+ Remove all exited containers
```
# docker rm -f `docker ps -a -q --filter status=exited`
```
## Lab3: Managing Images
+ Search for an image
```
# docker search apline
```
+ Pull an image
```
# docker pull xiaoxijin/apline
```
+ Tag the image with a new name
```
# docker tag xiaoxijin/apline apline-c1
```
+ View the history of the image apline-c1
```
# docker image history apline-c1
```
+ Export an image
+ create a container apline-c1
```
# docker run -it --name apline-c1 xiaoxijin/apline sh
/work # touch newfile
/work # exit
```
+ Export the container to a file
```
# docker container export apline-c1 > apline-c1.tar
```
+ Import the file as an image
```
# docker image import apline-c1.tar apline-new
```
+ Create a container apline-new and check whether the file newfile exists
```
# docker run -it --name apline-new apline-new sh
```
+ View the history of the images apline-c1 and apline-new
```
# docker image history apline-new:latest
```
+ Save and load an image
```
# docker image save xiaoxijin/apline > xiaoxijin-apline.tar
# docker image load < xiaoxijin-apline.tar
```
+ Observe the difference between export/import and save/load
+ export dumps the container's current filesystem state to a local file; importing that file back into the local image store creates a brand-new image, so the original layer history is gone
+ image save/load
+ is meant for transferring images and preserves the original image metadata
## Lab4: Persistent Volume Management
Syntax:
-v host_dir:container_dir
+ Mount the host directory "/docker_data/data" onto the container directory "/data"
```
# docker run -itd --name apline -v /docker_data/data:/data xiaoxijin/apline sh
# docker exec apline df -h
```
+ With no host directory specified, Docker creates one (at a generated path) for the container directory "/data"
```
# docker run -itd --name apline -v /data xiaoxijin/apline sh
# docker container inspect apline|grep -i volume
```
+ Create a file under "/data" and check whether it shows up in the host directory
```
# docker exec apline touch /data/1
# cd /var/lib/docker/volumes/5de406b5079242f7b12ec30d77ca7230cdfe7298129ee6a57550ba62c2781550/_data
# ls
```
## Lab5: Running a WordPress Blog
### Containerized application architecture: WordPress as an example
+ Back-end
+ MySQL image
+ start a container from the MySQL image
+ a persistent volume stores the MySQL container's data
+ Front-end
+ wordpress image
+ start a container from the wordpress image
+ a persistent volume stores the wordpress container's data
+ Startup order
+ Back-end -> Front-end

---
+ Create the mariadb host directory "/docker_data/db1" and the wordpress host directory "/docker_data/wordpress"
```shell=
mkdir -p /docker_data/db1
mkdir -p /docker_data/wordpress
```
+ Pull the mariadb and wordpress images
```shell=
docker pull mariadb
docker pull wordpress
```
+ Create the database container, https://hub.docker.com/_/mysql
```shell=
docker run -itd --name db1 -p 3306:3306 -v /docker_data/db1:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=Redhat1! -e TZ="Asia/Taipei" mariadb --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
```
+ Run a command in the database container to create the WordPress database wp
```shell=
[root@host ~]# docker exec -it db1 mariadb -uroot -pRedhat1!
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
MariaDB [(none)]> create database wp;
MariaDB [(none)]> exit
```
+ Create the WordPress container, https://hub.docker.com/_/wordpress
```shell=
docker run -it --name wordpress -p 80:80 --link db1:mysql \
-v /docker_data/wordpress:/var/www/html \
-e WORDPRESS_DB_NAME=wp \
-e WORDPRESS_DB_USER=root \
-e WORDPRESS_DB_PASSWORD=Redhat1! \
-e ServerName=localhost \
-d wordpress
```
Checkpoint: confirm the STATUS of the wordpress and mariadb containers
```
[root@localhost db1]# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78aa65fa4ade wordpress "docker-entrypoint.s…" 10 minutes ago Up 10 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp wordpress
24c327e1aa12 mariadb "docker-entrypoint.s…" 13 minutes ago Up 13 minutes 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp db1
[root@localhost db1]#
```
```shell=
[root@Server55LEMP ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8060eb0e2959 wordpress "docker-entrypoint.s…" 3 hours ago Up 11 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp wordpress
dd642716c336 mariadb "docker-entrypoint.s…" 3 hours ago Up 11 minutes 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp db1
```
+ List all containers and note the CONTAINER IDs of the mariadb and wordpress containers
```shell=
docker ps -a
```
+ Start the mariadb and wordpress containers
```shell=
docker start dd642716c336
docker start 8060eb0e2959
```
+ List all containers again
```shell=
docker ps -a
```
+ Make the containers start whenever the host boots
```shell=
docker update --restart always dd642716c336
docker update --restart always 8060eb0e2959
```
+ Restart the Docker service
```shell=
systemctl restart docker
```
+ Open HTTP on the firewall
```
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload
```
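The stack above can also be captured declaratively. Below is a sketch as a Compose file, assuming Docker Compose is available on the host; the images, ports, volumes, and the `Redhat1!` password are taken from the `docker run` commands above, `depends_on` encodes the back-end-first startup order, and `restart: always` stands in for the `docker update --restart always` step:

```yaml
version: "3"
services:
  db1:
    image: mariadb
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - /docker_data/db1:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: "Redhat1!"
      TZ: "Asia/Taipei"
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
  wordpress:
    image: wordpress
    restart: always
    depends_on:
      - db1
    ports:
      - "80:80"
    volumes:
      - /docker_data/wordpress:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db1
      WORDPRESS_DB_NAME: wp
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: "Redhat1!"
```

With this file, `docker compose up -d` starts both containers; the Compose network replaces the legacy `--link db1:mysql` flag, so WordPress reaches the database by the service name db1.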
On a PC, in Chrome, confirm that wordpress works and can connect to mariadb:
http://10.0.100.51/
http://master/wp-admin/
admin
## WordPress Markdown (WP Githuber MD or WP Editor.md)
A plugin that brings full Markdown support to a WordPress site; features include a Markdown editor, ...
Developer: Terry Lin
Install WP Githuber MD, a WordPress Markdown editor plugin
```
Plugins --> Add New
WP Githuber MD
Activate --> Settings
```

```
WP Githuber MD module settings
KaTeX
flowchart
MathJax
```


```
Appearance --> Themes --> Add New --> search for a theme
Pixgraphy
```

```
## 1
katex
f(x) = \int_{-\infty}^\infty\hat f(\xi)\,e^{2 \pi i \xi x}\,d\xi
## 2
mathjax
f(x) = \int_{-\infty}^\infty\hat f(\xi)\,e^{2 \pi i \xi x}\,d\xi
## 3
flow
st=>start: User login
op=>operation: Operation
cond=>condition: Successful Yes or No?
e=>end: Into admin
st->op->cond
cond(yes)->e
cond(no)->op
```
0505 Docker: WordPress acceptance check
{%youtube jfLBSX6ZYM4 %}
https://youtu.be/jfLBSX6ZYM4
## Lab6: Building a Custom Image
+ Pull the centos:7 image
```
# docker pull centos:7
# docker image ls |grep centos
```
+ Dockerfile preparation
+ create a directory "webserver"
+ move the file "jdk-8u191-linux-x64.tar.gz" into the "webserver" directory
+ Dockerfile
+ the script that builds the image, named Dockerfile
+ when building an image, the Dockerfile must sit in the same parent directory as the files to be packaged
```dockerfile=
FROM centos:7
MAINTAINER users
RUN yum install -y wget
RUN cd /
ADD jdk-8u191-linux-x64.tar.gz /
RUN wget https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.35/bin/apache-tomcat-8.5.35.tar.gz
RUN tar zxvf apache-tomcat-8.5.35.tar.gz
RUN rm -f apache-tomcat-8.5.35.tar.gz
RUN echo "My Web Server" > /apache-tomcat-8.5.35/webapps/ROOT/web.html
EXPOSE 8080
ENV JAVA_HOME=/jdk1.8.0_191
ENV PATH=$PATH:/jdk1.8.0_191/bin
# use either ENTRYPOINT or CMD, not both
ENTRYPOINT ["/apache-tomcat-8.5.35/bin/catalina.sh", "run"]
CMD ["/apache-tomcat-8.5.35/bin/catalina.sh", "run"]
```
+ The RUN, COPY, and ADD instructions create image layers; the other instructions only create temporary intermediate images and do not affect the size of the final image.
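As a sketch of that layer rule, the separate wget/tar/rm RUN steps above can be merged into a single RUN so the downloaded tarball is deleted in the same layer that created it and never persists in the image (a hypothetical variant of the same Dockerfile, identical URLs and paths):

```dockerfile=
FROM centos:7
RUN yum install -y wget
ADD jdk-8u191-linux-x64.tar.gz /
# One layer: download, unpack, and delete the tarball in the same RUN,
# so the .tar.gz itself never ends up stored in any layer.
RUN wget https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.35/bin/apache-tomcat-8.5.35.tar.gz \
 && tar zxf apache-tomcat-8.5.35.tar.gz \
 && rm -f apache-tomcat-8.5.35.tar.gz
EXPOSE 8080
ENV JAVA_HOME=/jdk1.8.0_191
ENV PATH=$PATH:/jdk1.8.0_191/bin
CMD ["/apache-tomcat-8.5.35/bin/catalina.sh", "run"]
```

Comparing `docker image history` between this build and the original shows the tarball-sized layers collapse into one smaller layer.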
+ Build the image
+ -t sets the image name
```
# docker build -t webserver .
```
+ Create the container "webserver"
```
# docker run -d --name webserver -p 8081:8080 webserver
```
+ Inspect the image history
```
# docker image history webserver
# docker image history webserver --no-trunc
```
+ Compare the difference between the following commands
```
# docker run -it --name webserver -p 8081:8080 webserver bash
# docker run -it --name webserver -p 8081:8080 webserver /apache-tomcat-8.5.35/bin/catalina.sh run
```
Supplementary:
+ Create a gitlab container
```
# docker run -itd --name gitlab \
-p 2222:22 \
-p 443:443 \
-v /docker_data/gitlab/data:/var/opt/gitlab \
-v /docker_data/gitlab/config:/etc/gitlab \
-v /docker_data/gitlab/logs:/var/log/gitlab \
gitlab/gitlab-ce
```
```
[root@host ~]# kubectl expose deployment nginx --type=NodePort --port=80
service/nginx exposed
[root@host ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 36m
nginx NodePort 10.109.88.211 <none> 80:31383/TCP 36s
[root@host ~]# minikube ip
192.168.39.40
[root@host ~]# curl 192.168.39.40:31383
```
Deploying the master
+ Initialize the cluster with kubeadm
```
kubeadm init --service-cidr 10.96.0.0/12 \
  --pod-network-cidr 172.16.0.0/16 \
  --apiserver-advertise-address 0.0.0.0
```
```
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
```
+ Deploy the worker nodes (node1, node2)
```
kubeadm join 192.168.122.10:6443 --token g8hhfx.an74ft8hqkpfi99f \
--discovery-token-ca-cert-hash sha256:1fd1061c48a4123be1d478835ba447c905dfbe79520fd2aaf663b32c2d36e804
```
```
[root@host .kube]# cat config
apiVersion: v1
clusters:
- cluster:
certificate-authority: /root/.minikube/ca.crt
server: https://192.168.39.40:8443
name: minikube
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: /root/.minikube/client.crt
client-key: /root/.minikube/client.key
[root@host .kube]# ls -l
total 12
drwxr-x--- 3 root root 4096 Mar 21 11:27 cache
-rw------- 1 root root 402 Mar 21 11:50 config
drwxr-x--- 3 root root 4096 Mar 21 12:02 http-cache
[root@host .kube]# mv config config.minikube
[root@host .kube]# pwd
/root/.kube
[root@host .kube]# scp master:/root/.kube/config .
```
Check the node status again on the master
```
# kubectl get nodes -o wide
[root@host .kube]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com NotReady master 21m v1.15.0
node1.example.com NotReady <none> 19m v1.15.0
node2.example.com NotReady <none> 19m v1.15.0
```
```
[root@host yaml]# pwd
/root/share/yaml
[root@host yaml]# vi kube-flannel.yml
[root@host yaml]# kubectl apply -f kube-flannel.yml
[root@host yaml]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready master 30m v1.15.0
node1.example.com Ready <none> 28m v1.15.0
node2.example.com Ready <none> 28m v1.15.0
```
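The nodes can take a moment to flip from NotReady to Ready after flannel is applied. Below is a sketch of a small polling helper; it is parameterized over the command so the same logic wraps `kubectl get nodes`, whose STATUS is assumed to be column 2 of the table:

```shell
# Exit 0 only when every node row in the table reports Ready.
all_ready() {
  "$@" | awk 'NR > 1 && $2 != "Ready" {bad=1} END {exit bad}'
}

# Usage:
#   until all_ready kubectl get nodes; do sleep 5; done
```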
## Lab7 - Installing minikube
Suitable for single-machine testing; the host must use virtualization to create the minikube VM
* Install minikube
* minikube downloads - https://github.com/kubernetes/minikube/releases
* kvm2 driver - https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver
* pick the minikube version matching your OS:
* minikube 0.30.0 ships k8s 1.10.0
* minikube 1.0.1 ships k8s 1.14.1, but has a bug: it hangs at "Waiting for pods: apiserver proxy"
* download into /usr/local/bin
```shell=
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.30.0/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube
```
```shell=
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.0.1/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube
```
* Install kubectl
* install it from the package repo
```shell=
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```
```shell=
yum install -y kubectl-1.15.0
```
* Deploy minikube
> Install the kvm packages and add the regular user to the libvirt group so a non-root user is allowed to run it
```shell=
yum install libvirt-daemon-kvm qemu-kvm
```
> Install docker-machine-driver-kvm2
```shell=
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
&& sudo install docker-machine-driver-kvm2 /usr/local/bin/
```
> Initialize minikube
```shell=
minikube start --vm-driver kvm2
```
```
😄 minikube v1.0.1 on linux (amd64)
💿 Downloading Minikube ISO ...
142.88 MB / 142.88 MB [============================================] 100.00% 0s
🤹 Downloading Kubernetes v1.14.1 images in the background ...
🔥 Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.39.65
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.3-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubelet v1.14.1
💾 Downloading kubeadm v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🚀 Launching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑 Configuring cluster permissions ...
🤔 Verifying component health .....
💗 kubectl is now configured to use "minikube"
🏄 Done! Thank you for using minikube!
```

* To rebuild the minikube VM, delete the ".minikube" directory and run the deployment again
```shell=
rm -rf .minikube
```
```shell=
source <(kubectl completion bash)
[root@host ~]# vi .bash_profile
[root@host ~]# cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
source <(kubectl completion bash)
source <(minikube completion bash)
```
## Lab8 - Installing a K8S Cluster
* Environment preparation
* prepare three VMs; the hostnames are master, node1 and node2
* the following steps must be performed on all three nodes
```shell=
scp /root/k8s_class_lab/repo/*.repo master:/etc/yum.repos.d/
scp /root/k8s_class_lab/repo/*.repo node1:/etc/yum.repos.d/
scp /root/k8s_class_lab/repo/*.repo node2:/etc/yum.repos.d/
```
On master, node1 and node2:
```shell=
yum install docker-ce-18.06.0.ce-3.el7 -y
systemctl start docker
systemctl enable docker
```
Set up bash auto-completion for Docker
Install the bash-completion package:
```shell=
yum install bash-completion -y
```
* Environment preparation
> Disable the SWAP space and SELinux
> Edit /etc/fstab and comment out the swap entry
```shell=
swapoff -a
#swapoff /dev/dm-1
vi /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
```
> Disable SELinux by editing /etc/selinux/config
```shell=
vi /etc/selinux/config
```
> Disable the firewall
```
systemctl stop firewalld;systemctl disable firewalld;
```
> Set the kernel parameters
```shell=
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
sysctl --system
```
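A quick sanity check that the fragment written above really contains every required key can be scripted. A sketch, with the file path left as a parameter so it can be pointed at /etc/sysctl.d/k8s.conf:

```shell
# Verify that each required kernel parameter appears in the given
# sysctl fragment; prints the missing ones and fails if any are absent.
check_k8s_sysctl() {
  file=$1
  missing=0
  for key in net.bridge.bridge-nf-call-ip6tables \
             net.bridge.bridge-nf-call-iptables \
             net.ipv4.ip_forward; do
    grep -q "^$key" "$file" || { echo "missing: $key"; missing=1; }
  done
  return $missing
}

# Usage:
#   check_k8s_sysctl /etc/sysctl.d/k8s.conf
```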
* kube-proxy uses an ipvs or iptables proxy
> Install the related packages
```shell=
yum install -y ipvsadm conntrack sysstat curl
```
> To stick with the default iptables mode, load the modules:
```shell=
modprobe br_netfilter
modprobe ip_vs
```
> Set up the repo
> Install the kubeadm, kubelet and kubectl packages
```shell=
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0 --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
```
* Deploy with kubeadm
> Deploy the master: initialize the cluster with kubeadm
On master:
```shell=
kubeadm init --service-cidr 10.96.0.0/12 \
--pod-network-cidr 172.16.0.0/16 \
--apiserver-advertise-address 0.0.0.0
```
> --service-cidr - Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
> --pod-network-cidr - Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
> Save the following output; the worker nodes need it to join the cluster
```
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.122.10:6443 --token ktpl5i.gwx4yf1jkzgkldpx \
--discovery-token-ca-cert-hash sha256:bd739e2a004b67d509ed9ac66230d2abe4c8c0c44296d7259cc96bd9e19de7c9
```
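If the join output above is lost, the CA hash can be recomputed from the cluster CA certificate with a standard openssl pipeline. A sketch; /etc/kubernetes/pki/ca.crt is where kubeadm stores the CA, and a fresh token itself can be issued with `kubeadm token create`:

```shell
# Compute the sha256:<hex> value used by --discovery-token-ca-cert-hash
# from a CA certificate file.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print "sha256:" $NF}'
}

# Usage (on the master):
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```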
```
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
[root@master /]# kubectl get nodes -o wide
[root@master /]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com NotReady master 11m v1.15.0
node1.example.com NotReady <none> 6m20s v1.15.0
node2.example.com NotReady <none> 6m5s v1.15.0
[root@master /]#
wget https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
```
On the host:
```shell=
[root@host yaml]# scp /root/k8s_class_lab/yaml/kube-flannel.yml master:/root
```
> kube-flannel.yml must match the --pod-network-cidr used when the cluster was initialized; edit the CIDR in the net-conf.json field
```
net-conf.json: |
  {
    "Network": "172.16.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```
> On the master, create the overlay network
```shell=
[root@master /]# vi kube-flannel.yml
[root@master /]# kubectl apply -f kube-flannel.yml
```
On node1 and node2:
```
kubeadm join 192.168.122.10:6443 --token ktpl5i.gwx4yf1jkzgkldpx \
--discovery-token-ca-cert-hash sha256:bd739e2a004b67d509ed9ac66230d2abe4c8c0c44296d7259cc96bd9e19de7c9
```
> List the pods in every namespace and watch the related pods come up in the kube-system namespace
```shell=
kubectl get pods --all-namespaces
```
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5c98db65d4-69cvt 1/1 Running 0 63m
kube-system coredns-5c98db65d4-cnf69 1/1 Running 0 63m
kube-system etcd-master.example.com 1/1 Running 0 62m
kube-system kube-apiserver-master.example.com 1/1 Running 0 62m
kube-system kube-controller-manager-master.example.com 1/1 Running 0 62m
kube-system kube-flannel-ds-amd64-589sn 1/1 Running 0 22m
kube-system kube-flannel-ds-amd64-m2z28 1/1 Running 0 22m
kube-system kube-flannel-ds-amd64-nbplw 1/1 Running 0 22m
kube-system kube-proxy-djnf5 1/1 Running 0 57m
kube-system kube-proxy-jdqnr 1/1 Running 0 57m
kube-system kube-proxy-qwnfw 1/1 Running 0 63m
kube-system kube-scheduler-master.example.com 1/1 Running 0 62m
```
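To spot anything unhealthy in output like the above, a small filter helps. A sketch, assuming the default column layout (NAMESPACE NAME READY STATUS RESTARTS AGE), so STATUS is column 4:

```shell
# Print only the pod rows whose STATUS column is not "Running".
not_running() {
  awk 'NR > 1 && $4 != "Running"'
}

# Usage:
#   kubectl get pods --all-namespaces | not_running
```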
> Once the network is deployed, check the node status again
```shell=
kubectl get nodes -o wide
```
```
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master.example.com Ready master 41m v1.15.0 192.168.122.10 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://18.6.0
node1.example.com Ready <none> 35m v1.15.0 192.168.122.11 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://18.6.0
node2.example.com Ready <none> 35m v1.15.0 192.168.122.12 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 docker://18.6.0
```
On the host:
```
[root@host .kube]# scp master:/root/.kube/config /root/.kube/
```
+ Set up bash auto-completion
```
yum install bash-completion
```
```
source <(kubectl completion bash)
[root@host ~]# vi .bash_profile
[root@host ~]# cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
source <(kubectl completion bash)
source <(minikube completion bash)
```
## Lab9 - Installing the Dashboard
Since k8s 1.7, the dashboard only allows access over HTTPS and can only be reached through kubectl proxy or a NodePort; these access methods are recommended for development environments only.
Install the dashboard
```
[root@host yaml]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
```
```
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
```
```
[root@host yaml]# kubectl -n kube-system get pods
```
```
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-hrghl 1/1 Running 0 57m
coredns-5c98db65d4-ldkh2 1/1 Running 0 57m
etcd-master.example.com 1/1 Running 0 56m
kube-apiserver-master.example.com 1/1 Running 0 56m
kube-controller-manager-master.example.com 1/1 Running 0 56m
kube-flannel-ds-amd64-2kq2z 1/1 Running 0 27m
kube-flannel-ds-amd64-dt67m 1/1 Running 0 27m
kube-flannel-ds-amd64-zdp9z 1/1 Running 0 27m
kube-proxy-2t95q 1/1 Running 0 55m
kube-proxy-n5qb9 1/1 Running 0 57m
kube-proxy-ncn2d 1/1 Running 0 55m
kube-scheduler-master.example.com 1/1 Running 0 56m
kubernetes-dashboard-7d75c474bb-cklrp 1/1 Running 0 6m38s
```
## Day 3
```
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready master 5d19h v1.15.0
node1.example.com Ready <none> 5d19h v1.15.0
node2.example.com NotReady <none> 5d19h v1.15.0
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready master 5d19h v1.15.0
node1.example.com Ready <none> 5d19h v1.15.0
node2.example.com Ready <none> 5d19h v1.15.0
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 1974 5d17h
nginx-68fc4699d8-bbtbh 1/1 Running 1 5d16h
nginx-68fc4699d8-v49wn 1/1 Running 0 5d16h
nginx-68fc4699d8-zdvnd 1/1 Running 0 5d16h
[root@master ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5c98db65d4-hrghl 1/1 Running 0 5d19h
kube-system coredns-5c98db65d4-ldkh2 1/1 Running 0 5d19h
kube-system etcd-master.example.com 1/1 Running 1 5d19h
kube-system kube-apiserver-master.example.com 1/1 Running 1 5d19h
kube-system kube-controller-manager-master.example.com 1/1 Running 1 5d19h
kube-system kube-flannel-ds-amd64-2kq2z 1/1 Running 1 5d18h
kube-system kube-flannel-ds-amd64-dt67m 1/1 Running 0 5d18h
kube-system kube-flannel-ds-amd64-zdp9z 1/1 Running 1 5d18h
kube-system kube-proxy-2t95q 1/1 Running 1 5d19h
kube-system kube-proxy-n5qb9 1/1 Running 1 5d19h
kube-system kube-proxy-ncn2d 1/1 Running 0 5d19h
kube-system kube-scheduler-master.example.com 1/1 Running 1 5d19h
kube-system kubernetes-dashboard-7d75c474bb-cklrp 1/1 Running 2 5d18h
```
## [Step-by-step: installing and using Docker](https://hackmd.io/@bluewings1211/SJkLOW9_l?type=view)
## [A one-hour Docker crash course](https://hackmd.io/@SjM9gr25SJWYlf7xMLg62w/By4tdVvBQ?type=view)
## [Building a LAMP environment with Dockerfile and Docker Compose](https://hackmd.io/@titangene/docker-lamp)
## [WSL (Windows Subsystem for Linux) + Docker](https://hackmd.io/@ebCv20MXS0y-pa8XsRnrOw/HyLfvFNnv)