# Fully Offline Installation of a Talos Linux Kubernetes Cluster
## Prerequisites
- [x] Talos Linux has already been installed to disk.
- The Kubernetes topology is one control plane and two workers (1m2w).
## Preparing the Images Required by Talos Linux Kubernetes
* openSUSE rescue tool download link:
https://download.opensuse.org/distribution/leap/15.4/live/
> Choose openSUSE-Leap-15.4-Rescue-CD-x86_64-Build85.5-Media.iso
## openSUSE VM Setup
* Attach three additional disks, mounting the vmdk files of talos-m1, talos-w1, and talos-w2 respectively.

## Boot the openSUSE VM
* Press Enter to enter the OS.

* On the desktop, right-click and choose "Open Terminal Here".

* Change the root user's password to root and enable SSH:
```
$ echo -e "root\nroot" | sudo passwd root
$ sudo systemctl enable --now sshd
```
* Install the containerd, ctr, and tree commands:
```
$ zypper in -y tree containerd containerd-ctr
$ mv /usr/sbin/containerd-ctr /usr/sbin/ctr
```
## Log in to openSUSE as root
* View the disk partition layout and filesystem types.
* sdb, sdc, and sdd are the disks of talos-m1, talos-w1, and talos-w2 respectively.
```
$ lsblk -f
NAME   FSTYPE FSVER LABEL                        UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0  squash 4.0                                                                           0   100% /run/overlay/squashfs_container
loop1  ext4   1.0                                d574ab73-a029-4f7e-933d-9f655de31f06  720.1M    72% /run/overlay/rootfsbase
sda
sdb
├─sdb1 vfat   FAT32 EFI                          35C6-03AB
├─sdb2
├─sdb3 xfs          BOOT                         48457131-e9b9-4050-ae3b-5f9f13c83b55
├─sdb4
├─sdb5 xfs          STATE                        838244fd-329c-49ee-a464-006a6d5b6659
└─sdb6 xfs          EPHEMERAL                    c42cc8ac-2d31-428d-aff5-69ef30f3a3bd
sdc
├─sdc1 vfat   FAT32 EFI                          EEB0-1A26
├─sdc2
├─sdc3 xfs          BOOT                         372a8388-a87c-4e88-8006-6392c14d621e
├─sdc4
├─sdc5 xfs          STATE                        4ececab0-cbce-4dd8-938b-0cc55fda93f7
└─sdc6 xfs          EPHEMERAL                    33320a7d-1f5f-4c36-97a6-d393d52c110e
sdd
├─sdd1 vfat   FAT32 EFI                          EF41-969B
├─sdd2
├─sdd3 xfs          BOOT                         e07e755f-f515-42ed-bcd5-c8bd13216846
├─sdd4
├─sdd5 xfs          STATE                        e00ea3d2-189d-4b9f-830c-0866acf04027
└─sdd6 xfs          EPHEMERAL                    f514f295-8700-4cba-aba0-4e6cc1da1213
sr0    iso966 Jolie openSUSE_Leap_15.4_Rescue_CD 2023-05-12-14-37-10-00                     0   100% /run/overlay/live
```
* Create the mount-point directories:
```
$ mkdir -p /talos-m1/{efi,boot,state,EPHEMERAL}
$ mkdir -p /talos-w1/{efi,boot,state,EPHEMERAL}
$ mkdir -p /talos-w2/{efi,boot,state,EPHEMERAL}
```
* Mount the talos-m1 disk:
```
$ mount -t vfat /dev/sdb1 /talos-m1/efi
$ mount -t xfs /dev/sdb3 /talos-m1/boot
$ mount -t xfs /dev/sdb5 /talos-m1/state
$ mount -t xfs /dev/sdb6 /talos-m1/EPHEMERAL
```
* Mount the talos-w1 disk:
```
$ mount -t vfat /dev/sdc1 /talos-w1/efi
$ mount -t xfs /dev/sdc3 /talos-w1/boot
$ mount -t xfs /dev/sdc5 /talos-w1/state
$ mount -t xfs /dev/sdc6 /talos-w1/EPHEMERAL
```
* Mount the talos-w2 disk:
```
$ mount -t vfat /dev/sdd1 /talos-w2/efi
$ mount -t xfs /dev/sdd3 /talos-w2/boot
$ mount -t xfs /dev/sdd5 /talos-w2/state
$ mount -t xfs /dev/sdd6 /talos-w2/EPHEMERAL
```
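The twelve mounts above follow one pattern per node. As a sanity check, a small sketch can regenerate the same commands; the device letters (sdb/sdc/sdd) are an assumption matching this lab's disk order, so verify them against `lsblk` before running anything:

```shell
#!/bin/sh
# Print the mount commands for each node's Talos partitions.
# Device letters (b/c/d) assume the disk order used in this lab.
set -eu
total=0
for entry in talos-m1:b talos-w1:c talos-w2:d; do
  node=${entry%%:*}   # e.g. talos-m1
  dev=${entry##*:}    # e.g. b
  echo "mount -t vfat /dev/sd${dev}1 /${node}/efi"
  echo "mount -t xfs  /dev/sd${dev}3 /${node}/boot"
  echo "mount -t xfs  /dev/sd${dev}5 /${node}/state"
  echo "mount -t xfs  /dev/sd${dev}6 /${node}/EPHEMERAL"
  total=$((total + 4))
done
```

Pipe the output through `sh` only after confirming the device letters, since disk enumeration order can differ between boots.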
* View the talos-m1 directory structure:
```
$ tree -L 3 /talos-m1
/talos-m1
├── EPHEMERAL
│ ├── lib
│ │ ├── containerd
│ │ ├── etcd
│ │ └── kubelet
│ ├── lock
│ │ └── lvm
│ ├── log
│ │ ├── audit
│ │ ├── containers
│ │ └── pods
│ ├── run
│ │ └── lock
│ └── system
│ └── overlays
├── boot
│ ├── A
│ │ ├── initramfs.xz
│ │ └── vmlinuz
│ ├── EFI
│ └── grub
│ ├── fonts
│ ├── grub.cfg
│ ├── grubenv
│ └── i386-pc
├── efi
└── state
├── config.yaml
├── node-identity.yaml
└── platform-network.yaml
23 directories, 7 files
```
* View the talos-w1 directory structure:
```
$ tree -L 3 /talos-w1
/talos-w1
├── EPHEMERAL
│ ├── lib
│ │ ├── containerd
│ │ └── kubelet
│ ├── lock
│ │ └── lvm
│ ├── log
│ │ ├── audit
│ │ ├── containers
│ │ └── pods
│ ├── run
│ │ └── lock
│ └── system
│ └── overlays
├── boot
│ ├── A
│ │ ├── initramfs.xz
│ │ └── vmlinuz
│ ├── EFI
│ └── grub
│ ├── fonts
│ ├── grub.cfg
│ ├── grubenv
│ └── i386-pc
├── efi
└── state
├── config.yaml
├── node-identity.yaml
└── platform-network.yaml
22 directories, 7 files
```
* View the talos-w2 directory structure:
```
$ tree -L 3 /talos-w2
/talos-w2
├── EPHEMERAL
│ ├── lib
│ │ ├── containerd
│ │ └── kubelet
│ ├── lock
│ │ └── lvm
│ ├── log
│ │ ├── audit
│ │ ├── containers
│ │ └── pods
│ ├── run
│ │ └── lock
│ └── system
│ └── overlays
├── boot
│ ├── A
│ │ ├── initramfs.xz
│ │ └── vmlinuz
│ ├── EFI
│ └── grub
│ ├── fonts
│ ├── grub.cfg
│ ├── grubenv
│ └── i386-pc
├── efi
└── state
├── config.yaml
├── node-identity.yaml
└── platform-network.yaml
22 directories, 7 files
```
## Importing Images into talos-m1
* Configure where talos-m1 stores its images:
```
$ nano /etc/containerd/config.toml
version = 2
# persistent data location
root = "/talos-m1/EPHEMERAL/lib/containerd/"
```
* Start containerd:
```
$ /usr/sbin/containerd &
$ ps aux | grep -v grep | grep containerd
root 2686 1.6 0.7 2023984 63036 pts/1 Sl 06:15 0:14 /usr/sbin/containerd
```
* Pull the images with ctr.
* containerd organizes images into namespaces; the images consumed by Kubernetes live in the `k8s.io` namespace.
```
ctr -n k8s.io image pull ghcr.io/siderolabs/flannel:v0.22.1
ctr -n k8s.io image pull ghcr.io/siderolabs/install-cni:v1.5.0-3-gb43c4e4
ctr -n k8s.io image pull registry.k8s.io/coredns/coredns:v1.10.1
ctr -n k8s.io image pull gcr.io/etcd-development/etcd:v3.5.10
ctr -n k8s.io image pull registry.k8s.io/kube-apiserver:v1.28.3
ctr -n k8s.io image pull registry.k8s.io/kube-controller-manager:v1.28.3
ctr -n k8s.io image pull registry.k8s.io/kube-scheduler:v1.28.3
ctr -n k8s.io image pull registry.k8s.io/kube-proxy:v1.28.3
ctr -n k8s.io image pull ghcr.io/siderolabs/kubelet:v1.28.3
ctr -n k8s.io image pull ghcr.io/siderolabs/installer:v1.5.5
ctr -n k8s.io image pull registry.k8s.io/pause:3.6
```
* View the talos-m1 images:
```
$ ctr -n k8s.io image ls
```
* Stop the containerd process:
```
$ kill -9 2686
[1]+ Killed /usr/sbin/containerd
```
* Unmount the talos-m1 filesystems:
```
$ umount /talos-m1/efi
$ umount /talos-m1/boot
$ umount /talos-m1/state
$ umount /talos-m1/EPHEMERAL
```
## Importing Images into talos-w1
* Configure where talos-w1 stores its images:
```
$ nano /etc/containerd/config.toml
version = 2
# persistent data location
root = "/talos-w1/EPHEMERAL/lib/containerd/"
```
* Start containerd:
```
$ /usr/sbin/containerd &
$ ps aux | grep -v grep | grep containerd
root 3768 0.2 0.5 1800996 45760 pts/1 Sl 06:31 0:00 /usr/sbin/containerd
```
* Pull the images with ctr:
```
ctr -n k8s.io image pull ghcr.io/siderolabs/flannel:v0.22.1
ctr -n k8s.io image pull ghcr.io/siderolabs/install-cni:v1.5.0-3-gb43c4e4
ctr -n k8s.io image pull registry.k8s.io/coredns/coredns:v1.10.1
ctr -n k8s.io image pull gcr.io/etcd-development/etcd:v3.5.10
ctr -n k8s.io image pull registry.k8s.io/kube-apiserver:v1.28.3
ctr -n k8s.io image pull registry.k8s.io/kube-controller-manager:v1.28.3
ctr -n k8s.io image pull registry.k8s.io/kube-scheduler:v1.28.3
ctr -n k8s.io image pull registry.k8s.io/kube-proxy:v1.28.3
ctr -n k8s.io image pull ghcr.io/siderolabs/kubelet:v1.28.3
ctr -n k8s.io image pull ghcr.io/siderolabs/installer:v1.5.5
ctr -n k8s.io image pull registry.k8s.io/pause:3.6
```
* View the talos-w1 images:
```
$ ctr -n k8s.io image ls
```
* Stop the containerd process:
```
$ kill -9 3768
[1]+ Killed /usr/sbin/containerd
```
* Unmount the talos-w1 filesystems:
```
$ umount /talos-w1/efi
$ umount /talos-w1/boot
$ umount /talos-w1/state
$ umount /talos-w1/EPHEMERAL
```
## Importing Images into talos-w2
* Configure where talos-w2 stores its images:
```
$ nano /etc/containerd/config.toml
version = 2
# persistent data location
root = "/talos-w2/EPHEMERAL/lib/containerd/"
```
* Start containerd:
```
$ /usr/sbin/containerd &
$ ps aux | grep -v grep | grep containerd
root 4247 0.2 0.5 1800996 48216 pts/1 Sl 06:36 0:00 /usr/sbin/containerd
```
* Pull the images with ctr:
```
ctr -n k8s.io image pull ghcr.io/siderolabs/flannel:v0.22.1
ctr -n k8s.io image pull ghcr.io/siderolabs/install-cni:v1.5.0-3-gb43c4e4
ctr -n k8s.io image pull registry.k8s.io/coredns/coredns:v1.10.1
ctr -n k8s.io image pull gcr.io/etcd-development/etcd:v3.5.10
ctr -n k8s.io image pull registry.k8s.io/kube-apiserver:v1.28.3
ctr -n k8s.io image pull registry.k8s.io/kube-controller-manager:v1.28.3
ctr -n k8s.io image pull registry.k8s.io/kube-scheduler:v1.28.3
ctr -n k8s.io image pull registry.k8s.io/kube-proxy:v1.28.3
ctr -n k8s.io image pull ghcr.io/siderolabs/kubelet:v1.28.3
ctr -n k8s.io image pull ghcr.io/siderolabs/installer:v1.5.5
ctr -n k8s.io image pull registry.k8s.io/pause:3.6
```
* View the talos-w2 images:
```
$ ctr -n k8s.io image ls
```
* Stop the containerd process:
```
$ kill -9 4247
[1]+ Killed /usr/sbin/containerd
```
* Unmount the talos-w2 filesystems:
```
$ umount /talos-w2/efi
$ umount /talos-w2/boot
$ umount /talos-w2/state
$ umount /talos-w2/EPHEMERAL
```
## Checking the Disks
```
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 550.7M 1 loop /run/overlay/squashfs_container
loop1 7:1 0 3.3G 1 loop /run/overlay/rootfsbase
sda 8:0 0 50G 0 disk
sdb 8:16 0 1T 0 disk
├─sdb1 8:17 0 100M 0 part
├─sdb2 8:18 0 1M 0 part
├─sdb3 8:19 0 1000M 0 part
├─sdb4 8:20 0 1M 0 part
├─sdb5 8:21 0 100M 0 part
└─sdb6 8:22 0 1022.8G 0 part
sdc 8:32 0 1T 0 disk
├─sdc1 8:33 0 100M 0 part
├─sdc2 8:34 0 1M 0 part
├─sdc3 8:35 0 1000M 0 part
├─sdc4 8:36 0 1M 0 part
├─sdc5 8:37 0 100M 0 part
└─sdc6 8:38 0 1022.8G 0 part
sdd 8:48 0 1T 0 disk
├─sdd1 8:49 0 100M 0 part
├─sdd2 8:50 0 1M 0 part
├─sdd3 8:51 0 1000M 0 part
├─sdd4 8:52 0 1M 0 part
├─sdd5 8:53 0 100M 0 part
└─sdd6 8:54 0 1022.8G 0 part
sr0 11:0 1 630.1M 0 rom /run/overlay/live
```
## Shut Down the openSUSE VM
```
$ poweroff
```
## Verification
* Boot the Talos Linux nodes (1m2w).
* The Talos Linux network mode is set to host-only.
* Checking the talos-m1 services shows time synchronization stuck: the nodes cannot reach the internet, so a local NTP server must be built.
```
$ talosctl --nodes 192.168.247.11 --talosconfig=./talosconfig service
NODE SERVICE STATE HEALTH LAST CHANGE LAST EVENT
192.168.247.11 apid Running OK 5m37s ago Health check successful
192.168.247.11 containerd Running OK 5m43s ago Health check successful
192.168.247.11 cri Running OK 5m42s ago Health check successful
192.168.247.11 dashboard Running ? 5m45s ago Process Process(["/sbin/dashboard"]) started with PID 2076
192.168.247.11 etcd Waiting ? 5m41s ago Waiting for time sync
192.168.247.11 kubelet Waiting ? 32s ago Waiting for time sync
192.168.247.11 machined Running OK 5m46s ago Health check successful
192.168.247.11 trustd Waiting ? 5m42s ago Waiting for time sync
192.168.247.11 udevd Running OK 5m45s ago Health check successful
```
### Building a Local NTP Server
* Build the NTP server on a SLES 15 SP5 host.
* The Legacy-Module_15.4-0 module must be enabled first.
```
# Install the ntp package
$ sudo zypper in -y ntp
```
* Configure the NTP server parameters.
* Use the local clock as the time source and allow the 192.168.247.0/24 subnet to synchronize:
```
$ sudo vim /etc/ntp.conf
server 127.127.1.0 prefer # change this line
fudge 127.127.1.0 stratum 10 # change this line
##
## Add external Servers using
## # rcntpd addserver <yourserver>
## The servers will only be added to the currently running instance, not
## to /etc/ntp.conf.
##
# Access control configuration; see /usr/share/doc/packages/ntp/html/accopt.html for
# details. The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
#
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.
# By default, exchange time with everybody, but don't allow configuration.
#restrict -4 default notrap nomodify nopeer noquery
restrict default kod nomodify notrap nopeer noquery
restrict -6 default notrap nomodify nopeer noquery
# Local users may interrogate the ntp server more closely.
#restrict 192.168.11.65
restrict 192.168.247.0 mask 255.255.255.0 nomodify # change this line
restrict 127.0.0.1 # change this line
restrict -6 ::1 # change this line
# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust
##
## Miscellaneous stuff
##
driftfile /var/lib/ntp/drift/ntp.drift # path for drift file
logfile /var/log/ntp # alternate log file
```
* Enable and start the NTP service:
```
$ sudo systemctl enable ntpd.service --now
```
* Check that the NTP service is working:
```
# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*LOCAL(0) .LOCL. 10 l 40 64 377 0.000 +0.000 0.000
```
### Creating the Talos Linux Kubernetes Cluster
* Update the NTP server address in the Talos machine config.
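A minimal sketch of that machine-config change, assuming the Talos `machine.time.servers` field; the address 192.168.247.100 is a hypothetical placeholder for the NTP server built above:

```yaml
# ntp-patch.yaml - hypothetical address; point it at your NTP server.
machine:
  time:
    servers:
      - 192.168.247.100
```

It can then be applied per node, e.g. `talosctl --nodes 192.168.247.11 --talosconfig=./talosconfig patch machineconfig --patch @ntp-patch.yaml` (the exact flag shape may differ across talosctl versions).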

* Verify that time is now synchronized on every Talos Linux node and that all services are healthy:
```
$ talosctl --nodes 192.168.247.11 --talosconfig=./talosconfig service
NODE SERVICE STATE HEALTH LAST CHANGE LAST EVENT
192.168.247.11 apid Running OK 1m1s ago Health check successful
192.168.247.11 containerd Running OK 1m7s ago Health check successful
192.168.247.11 cri Running OK 1m5s ago Health check successful
192.168.247.11 dashboard Running ? 1m8s ago Process Process(["/sbin/dashboard"]) started with PID 2077
192.168.247.11 etcd Preparing ? 1m5s ago Running pre state
192.168.247.11 kubelet Running OK 1m3s ago Health check successful
192.168.247.11 machined Running OK 1m10s ago Health check successful
192.168.247.11 trustd Running OK 1m5s ago Health check successful
192.168.247.11 udevd Running OK 1m8s ago Health check successful
$ talosctl --nodes 192.168.247.21 --talosconfig=./talosconfig service
NODE SERVICE STATE HEALTH LAST CHANGE LAST EVENT
192.168.247.21 apid Running OK 1m23s ago Health check successful
192.168.247.21 containerd Running OK 1m24s ago Health check successful
192.168.247.21 cri Running OK 1m23s ago Health check successful
192.168.247.21 dashboard Running ? 1m26s ago Process Process(["/sbin/dashboard"]) started with PID 2074
192.168.247.21 kubelet Running OK 1m20s ago Health check successful
192.168.247.21 machined Running OK 1m27s ago Health check successful
192.168.247.21 udevd Running OK 1m26s ago Health check successful
$ talosctl --nodes 192.168.247.22 --talosconfig=./talosconfig service
NODE SERVICE STATE HEALTH LAST CHANGE LAST EVENT
192.168.247.22 apid Running OK 1m18s ago Health check successful
192.168.247.22 containerd Running OK 1m23s ago Health check successful
192.168.247.22 cri Running OK 1m23s ago Health check successful
192.168.247.22 dashboard Running ? 1m25s ago Process Process(["/sbin/dashboard"]) started with PID 2074
192.168.247.22 kubelet Running OK 1m21s ago Health check successful
192.168.247.22 machined Running OK 1m26s ago Health check successful
192.168.247.22 udevd Running OK 1m25s ago Health check successful
```
### Initializing Talos Linux Kubernetes
* Because the images are already present on the Talos Linux nodes, Kubernetes initialization completes much faster:
```
$ talosctl --nodes 192.168.247.11 --talosconfig=./talosconfig bootstrap
```
* Generate the kubeconfig:
```
$ talosctl --nodes 192.168.247.11 --talosconfig=./talosconfig kubeconfig
```
* Check the Kubernetes node status:
```
$ kubectl get no
NAME STATUS ROLES AGE VERSION
m1 Ready control-plane 107s v1.28.3
w1 Ready <none> 104s v1.28.3
w2 Ready <none> 101s v1.28.3
```
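As a final check, the Ready count can be extracted from that output programmatically. This sketch parses the sample output above; in the live cluster the same awk filter would run against `kubectl get no --no-headers`:

```shell
#!/bin/sh
# Count Ready nodes in `kubectl get no` output; a 1m2w cluster should show 3.
set -eu
sample='NAME STATUS ROLES AGE VERSION
m1 Ready control-plane 107s v1.28.3
w1 Ready <none> 104s v1.28.3
w2 Ready <none> 101s v1.28.3'
# Skip the header row, keep rows whose STATUS column is Ready, count them.
ready=$(printf '%s\n' "$sample" | awk 'NR > 1 && $2 == "Ready"' | wc -l)
echo "Ready nodes: $ready"
```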