# You said you were good at k8s, right?
# Problem statement
Your workplace has decided to set up Redmine on Kubernetes.
Your boss started the build, but apparently Redmine will not start properly.
Since you, the subordinate, are supposed to be good at k8s, you have taken over the work.
The Redmine manifests have already been applied on the server running Kubernetes, but Redmine does not seem to start correctly.
Investigate the cause, get Redmine into a usable state, and report how you solved it.
## Goal
- Redmine can be viewed from the browser on the VNC client at `http://192.168.0.100:30000`.
- Redmine's data is retained across container restarts.
## Information
- Server:
    - k8smaster1:
        - ip: 192.168.0.100
        - userid: root
        - password: USerPw@19
    - container-registry:
        - ip: 192.168.0.101
        - note: must not be operated
- Redmine_Manifest:
    - path: "/root/ictsc_problem_manifests/*.yaml"
- Redmine login credentials:
    - userid: ictsc
    - password: USerPw@19
## Constraints
- Redmine must be deployed with the specified manifests (Redmine_Manifest).
- Redmine_Manifest must not be modified.
- The container images referenced in Redmine_Manifest must be pulled from container-registry.
- Re-applying the manifests and rebooting the OS are allowed.
- If the environment becomes unusable due to a mistake, only a restore to the initial state is offered.
# Work log
## Logging in
Log in to the Ubuntu machine, then on to the k8s master server.
For now, apply the two YAML files.
No idea what is wrong.
Check it from VNC.
No good.
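Roughly what was run up to this point (a sketch; the actual file names under `/root/ictsc_problem_manifests` are not listed here):
```
ssh root@192.168.0.100
cd /root/ictsc_problem_manifests
# Apply every manifest in the directory (the path comes from Redmine_Manifest above).
kubectl apply -f .
```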
:::spoiler
```
[root@k8smaster1 ictsc_problem_manifests]# kubectl get pod
NAME READY STATUS RESTARTS AGE
my-release-mariadb-0 0/1 Pending 0 13d
my-release-redmine-859cf77958-n95j5 0/1 Pending 0 13d
[root@k8smaster1 ictsc_problem_manifests]# kubectl describe my-release-redmine-859cf77958-n95j5
error: the server doesn't have a resource type "my-release-redmine-859cf77958-n95j5"
[root@k8smaster1 ictsc_problem_manifests]# kubectl get my-release-redmine-859cf77958-n95j5
error: the server doesn't have a resource type "my-release-redmine-859cf77958-n95j5"
[root@k8smaster1 ictsc_problem_manifests]# kubectl describe pod my-release-redmine-859cf77958-n95j5
Name: my-release-redmine-859cf77958-n95j5
Namespace: default
Priority: 0
Node: <none>
Labels: app=my-release-redmine
chart=redmine-13.0.1
pod-template-hash=859cf77958
release=my-release
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/my-release-redmine-859cf77958
Containers:
my-release-redmine:
Image: private-registry.local/bitnami/redmine:4.0.5-debian-9-r8
Port: 3000/TCP
Host Port: 0/TCP
Liveness: http-get http://:http/ delay=300s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
REDMINE_DB_MYSQL: my-release-mariadb
REDMINE_DB_NAME: bitnami_redmine
REDMINE_DB_USERNAME: bn_redmine
REDMINE_DB_PASSWORD: <set to the key 'mariadb-password' in secret 'my-release-mariadb'> Optional: false
REDMINE_USERNAME: ictsc
REDMINE_PASSWORD: <set to the key 'redmine-password' in secret 'my-release-redmine'> Optional: false
REDMINE_EMAIL: user@example.com
REDMINE_LANG: en
Mounts:
/bitnami/redmine from redmine-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dvlbb (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
redmine-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-release-redmine
ReadOnly: false
default-token-dvlbb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dvlbb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 42s (x1518 over 37h) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
```
:::
"had taints that the pod didn't tolerate"
[Master NodeにPodをデプロイするための設定 - Qiita](https://qiita.com/nykym/items/dcc572c21885543d94c8)
This is probably it.
:::spoiler
```
[root@k8smaster1 ictsc_problem_manifests]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8smaster1 Ready master 13d v1.15.3
[root@k8smaster1 ictsc_problem_manifests]# kubectl describe node k8smaster1
Name: k8smaster1
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8smaster1
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 23 Nov 2019 19:58:55 +0900
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 05 Dec 2019 23:21:18 +0900 Thu, 05 Dec 2019 23:21:18 +0900 WeaveIsUp Weave pod has set this
MemoryPressure False Sat, 07 Dec 2019 13:18:46 +0900 Sat, 23 Nov 2019 19:58:50 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 07 Dec 2019 13:18:46 +0900 Sat, 23 Nov 2019 19:58:50 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 07 Dec 2019 13:18:46 +0900 Sat, 23 Nov 2019 19:58:50 +0900 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 07 Dec 2019 13:18:46 +0900 Sat, 23 Nov 2019 19:58:50 +0900 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.0.100
Hostname: k8smaster1
Capacity:
cpu: 4
ephemeral-storage: 16376680Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8173692Ki
pods: 110
Allocatable:
cpu: 3800m
ephemeral-storage: 15092748264
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7571292Ki
pods: 110
System Info:
Machine ID: 398a7b87a02648f197647f533623f011
System UUID: 53F3BC4C-C868-40CA-A16E-07773AB21342
Boot ID: 89cf7763-5d69-4823-9757-f48943d33aa1
Kernel Version: 3.10.0-1062.1.1.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.11.13-1.rhaos3.11.gitfb88a9c.el7
Kubelet Version: v1.15.3
Kube-Proxy Version: v1.15.3
PodCIDR: 10.233.64.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-74c9d4d795-hj9tn 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 13d
kube-system dns-autoscaler-7d95989447-crv67 20m (0%) 0 (0%) 10Mi (0%) 0 (0%) 13d
kube-system kube-apiserver-k8smaster1 250m (6%) 0 (0%) 0 (0%) 0 (0%) 13d
kube-system kube-controller-manager-k8smaster1 200m (5%) 0 (0%) 0 (0%) 0 (0%) 13d
kube-system kube-proxy-mm8ld 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13d
kube-system kube-scheduler-k8smaster1 100m (2%) 0 (0%) 0 (0%) 0 (0%) 13d
kube-system nodelocaldns-wgq47 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 13d
kube-system weave-net-z97jl 20m (0%) 0 (0%) 0 (0%) 0 (0%) 13d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 790m (20%) 0 (0%)
memory 150Mi (2%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
Events: <none>
[root@k8smaster1 ictsc_problem_manifests]#
```
:::
```
[root@k8smaster1 ictsc_problem_manifests]# kubectl taint node k8smaster1 node-role.kubernetes.io/master:NoSchedule-
node/k8smaster1 untainted
[root@k8smaster1 ictsc_problem_manifests]# kubectl get pod
NAME READY STATUS RESTARTS AGE
my-release-mariadb-0 0/1 ErrImagePull 0 13d
my-release-redmine-859cf77958-n95j5 0/1 ErrImagePull 0 13d
```
The status changed.
DNS looks fine.
```
[root@k8smaster1 ictsc_problem_manifests]# curl -v private-registry.local
* About to connect() to private-registry.local port 80 (#0)
* Trying 192.168.0.101...
```
Access to private-registry.local works over HTTP, but over HTTPS it fails with No Route To Host.
That is probably why the image pulls are failing.
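Roughly the check behind that statement (a sketch; `/v2/_catalog` is the standard Docker Registry v2 endpoint):
```
# Plain HTTP reaches the registry...
curl -v http://private-registry.local/v2/_catalog
# ...while HTTPS fails with "No route to host", so the registry is HTTP-only.
curl -v https://private-registry.local/v2/_catalog
```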
Pull the images manually.
```
[root@k8smaster1 ictsc_problem_manifests]# docker pull private-registry.local/bitnami/mariadb:10.3.20-debian-9-r0
Trying to pull repository private-registry.local/bitnami/mariadb ...
10.3.20-debian-9-r0: Pulling from private-registry.local/bitnami/mariadb
3c9020349340: Pull complete
47e9b8e7eee2: Pull complete
14ed16664f82: Pull complete
0fd01a09a90c: Pull complete
99ddde791065: Pull complete
9d29288d53d5: Pull complete
b4db9c49879c: Pull complete
bfd6122f04c0: Pull complete
d5d739dd4731: Pull complete
7f1646ba4945: Pull complete
Digest: sha256:4867b36293504a98e1cf3fbbae657845cf24b090a610930836088178604f02ca
Status: Downloaded newer image for private-registry.local/bitnami/mariadb:10.3.20-debian-9-r0
```
```
[root@k8smaster1 ictsc_problem_manifests]# docker pull private-registry.local/bitnami/redmine:4.0.5-debian-9-r8
Trying to pull repository private-registry.local/bitnami/redmine ...
4.0.5-debian-9-r8: Pulling from private-registry.local/bitnami/redmine
3c9020349340: Already exists
82e0e0887848: Pull complete
5f0fa81b980d: Pull complete
1d701cb0218d: Pull complete
35b84e5a362c: Pull complete
60480d3280d3: Pull complete
313866d42293: Pull complete
915b8688b3f1: Pull complete
61139c07ff91: Pull complete
9a181294dc4a: Pull complete
cfc737ca3fe2: Pull complete
7a3134abf86e: Pull complete
73c8e046df9c: Pull complete
63d02de79f68: Pull complete
Digest: sha256:5a6c8b284f3df729d92bd16811a8e742687ee6297be670353cbb0ed703af2042
Status: Downloaded newer image for private-registry.local/bitnami/redmine:4.0.5-debian-9-r8
```
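The manual `docker pull` succeeds, yet the pods stay in `ErrImagePull`. Note that the node reports cri-o as its container runtime (see the `crio.sock` annotation in the node description above), and cri-o keeps its own image store, so images pulled with docker are not visible to the kubelet.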
```
[root@k8smaster1 ictsc_problem_manifests]# kubectl create secret docker-registry regcred --docker-username='k8smaster1' --docker-password='pass' --docker-server=http://private-registry.local
secret/regcred created
[root@k8smaster1 ictsc_problem_manifests]# kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
serviceaccount/default patched (no change)
```
## Reset
The environment appears to have been rolled back to the initial state (the master taint is back and both pods are Pending again), so retrace the steps.
```
[root@k8smaster1 ~]# ictsc_problem_manifests]# kubectl get pod
-bash: ictsc_problem_manifests]#: command not found
[root@k8smaster1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
my-release-mariadb-0 0/1 Pending 0 13d
my-release-redmine-859cf77958-n95j5 0/1 Pending 0 13d
[root@k8smaster1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8smaster1 Ready master 13d v1.15.3
[root@k8smaster1 ~]# kubectl describe node k8smaster1
Name: k8smaster1
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8smaster1
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 23 Nov 2019 19:58:55 +0900
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sat, 07 Dec 2019 16:09:41 +0900 Sat, 07 Dec 2019 16:09:41 +0900 WeaveIsUp Weave pod has set this
MemoryPressure False Sat, 07 Dec 2019 16:14:07 +0900 Sat, 23 Nov 2019 19:58:50 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 07 Dec 2019 16:14:07 +0900 Sat, 23 Nov 2019 19:58:50 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 07 Dec 2019 16:14:07 +0900 Sat, 23 Nov 2019 19:58:50 +0900 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 07 Dec 2019 16:14:07 +0900 Sat, 23 Nov 2019 19:58:50 +0900 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.0.100
Hostname: k8smaster1
Capacity:
cpu: 4
ephemeral-storage: 16376680Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8173700Ki
pods: 110
Allocatable:
cpu: 3800m
ephemeral-storage: 15092748264
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7571300Ki
pods: 110
System Info:
Machine ID: 398a7b87a02648f197647f533623f011
System UUID: E27B54A7-E624-4DFF-9AB4-437C773BB515
Boot ID: 6ec404e0-340d-4dcb-87b2-8d513282ef34
Kernel Version: 3.10.0-1062.1.1.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.11.13-1.rhaos3.11.gitfb88a9c.el7
Kubelet Version: v1.15.3
Kube-Proxy Version: v1.15.3
PodCIDR: 10.233.64.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-74c9d4d795-hj9tn 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 13d
kube-system dns-autoscaler-7d95989447-crv67 20m (0%) 0 (0%) 10Mi (0%) 0 (0%) 13d
kube-system kube-apiserver-k8smaster1 250m (6%) 0 (0%) 0 (0%) 0 (0%) 13d
kube-system kube-controller-manager-k8smaster1 200m (5%) 0 (0%) 0 (0%) 0 (0%) 13d
kube-system kube-proxy-mm8ld 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13d
kube-system kube-scheduler-k8smaster1 100m (2%) 0 (0%) 0 (0%) 0 (0%) 13d
kube-system nodelocaldns-wgq47 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 13d
kube-system weave-net-z97jl 20m (0%) 0 (0%) 0 (0%) 0 (0%) 13d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 790m (20%) 0 (0%)
memory 150Mi (2%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasNoDiskPressure 13d (x8 over 13d) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13d (x7 over 13d) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 13d (x8 over 13d) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientMemory
Normal Starting 13d kube-proxy, k8smaster1 Starting kube-proxy.
Normal NodeHasSufficientPID 13d kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientPID
Normal Starting 13d kubelet, k8smaster1 Starting kubelet.
Normal NodeAllocatableEnforced 13d kubelet, k8smaster1 Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 13d kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13d kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasNoDiskPressure
Normal Starting 13d kube-proxy, k8smaster1 Starting kube-proxy.
Normal Starting 13d kubelet, k8smaster1 Starting kubelet.
Normal NodeHasSufficientMemory 13d (x8 over 13d) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13d (x8 over 13d) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13d (x7 over 13d) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 13d kubelet, k8smaster1 Updated Node Allocatable limit across pods
Normal Starting 13d kube-proxy, k8smaster1 Starting kube-proxy.
Warning ImageGCFailed 13d (x10 over 13d) kubelet, k8smaster1 failed to get imageFs info: non-existent label "crio-images"
Normal Starting 13d kubelet, k8smaster1 Starting kubelet.
Normal NodeHasSufficientMemory 13d (x8 over 13d) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13d (x8 over 13d) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13d (x7 over 13d) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 13d kubelet, k8smaster1 Updated Node Allocatable limit across pods
Normal Starting 13d kube-proxy, k8smaster1 Starting kube-proxy.
Warning ImageGCFailed 13d kubelet, k8smaster1 failed to get imageFs info: non-existent label "crio-images"
Normal Starting 4m47s kubelet, k8smaster1 Starting kubelet.
Normal NodeHasSufficientMemory 4m47s (x8 over 4m47s) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m47s (x8 over 4m47s) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m47s (x7 over 4m47s) kubelet, k8smaster1 Node k8smaster1 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m47s kubelet, k8smaster1 Updated Node Allocatable limit across pods
Normal Starting 4m34s kube-proxy, k8smaster1 Starting kube-proxy.
[root@k8smaster1 ~]# kubectl taint node k8smaster1 node-role.kubernetes.io/master:NoSchedule-
node/k8smaster1 untainted
[root@k8smaster1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
my-release-mariadb-0 0/1 ErrImagePull 0 13d
my-release-redmine-859cf77958-n95j5 0/1 ErrImagePull 0 13d
[root@k8smaster1 ~]#
```
Not sure whether this is related; Kubernetes seems to be tracking containers that no longer exist.
https://github.com/moby/moby/issues/31655
```
[root@k8smaster1 ~]# systemctl status docker -l
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-12-07 16:40:41 JST; 55s ago
Docs: http://docs.docker.com
Main PID: 1063 (dockerd-current)
CGroup: /system.slice/docker.service
├─1063 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --selinux-enabled --log-driver=journald --signature-verification=false --storage-driver overlay2
└─1154 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc --runtime-args --systemd-cgroup=true
Dec 07 16:41:32 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:32.631091764+09:00" level=error msg="Handler for GET /containers/cbd6d396933f5a96f666bc25da8ad59e79066ef2c76056df94067b719e99d9fc/json returned error: No such container: cbd6d396933f5a96f666bc25da8ad59e79066ef2c76056df94067b719e99d9fc"
Dec 07 16:41:32 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:32.663568394+09:00" level=error msg="Handler for GET /containers/cbd6d396933f5a96f666bc25da8ad59e79066ef2c76056df94067b719e99d9fc/json returned error: No such container: cbd6d396933f5a96f666bc25da8ad59e79066ef2c76056df94067b719e99d9fc"
Dec 07 16:41:34 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:34.597797076+09:00" level=error msg="Handler for GET /containers/199ccf3867303ceacf0524d6f010a6341b89902b9d43c5395ae29c776b643e24/json returned error: No such container: 199ccf3867303ceacf0524d6f010a6341b89902b9d43c5395ae29c776b643e24"
Dec 07 16:41:34 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:34.613985437+09:00" level=error msg="Handler for GET /containers/0858a8e3244f9b8229a1c6d2e7211a7b34f9fb44d5750bfff31627fcf0665f3e/json returned error: No such container: 0858a8e3244f9b8229a1c6d2e7211a7b34f9fb44d5750bfff31627fcf0665f3e"
Dec 07 16:41:34 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:34.632248393+09:00" level=error msg="Handler for GET /containers/199ccf3867303ceacf0524d6f010a6341b89902b9d43c5395ae29c776b643e24/json returned error: No such container: 199ccf3867303ceacf0524d6f010a6341b89902b9d43c5395ae29c776b643e24"
Dec 07 16:41:34 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:34.644283818+09:00" level=error msg="Handler for GET /containers/0858a8e3244f9b8229a1c6d2e7211a7b34f9fb44d5750bfff31627fcf0665f3e/json returned error: No such container: 0858a8e3244f9b8229a1c6d2e7211a7b34f9fb44d5750bfff31627fcf0665f3e"
Dec 07 16:41:35 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:35.653236513+09:00" level=error msg="Handler for GET /containers/0cdd2aa3e292c861df69c096eef85e7415ad6976c835d312e222f7ff58cbdeb2/json returned error: No such container: 0cdd2aa3e292c861df69c096eef85e7415ad6976c835d312e222f7ff58cbdeb2"
Dec 07 16:41:35 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:35.685126291+09:00" level=error msg="Handler for GET /containers/0cdd2aa3e292c861df69c096eef85e7415ad6976c835d312e222f7ff58cbdeb2/json returned error: No such container: 0cdd2aa3e292c861df69c096eef85e7415ad6976c835d312e222f7ff58cbdeb2"
Dec 07 16:41:35 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:35.958337507+09:00" level=error msg="Handler for GET /containers/e48a8c83394171e302492fc24882a4e7b529d0f8aa27671cd392b0e9f59fa07b/json returned error: No such container: e48a8c83394171e302492fc24882a4e7b529d0f8aa27671cd392b0e9f59fa07b"
Dec 07 16:41:35 k8smaster1 dockerd-current[1063]: time="2019-12-07T16:41:35.990574009+09:00" level=error msg="Handler for GET /containers/e48a8c83394171e302492fc24882a4e7b529d0f8aa27671cd392b0e9f59fa07b/json returned error: No such container: e48a8c83394171e302492fc24882a4e7b529d0f8aa27671cd392b0e9f59fa07b"
```
Candidate fix (from the article below): register the registry with the container runtime as an insecure registry:
```
--insecure-registry=private-registry.local \
--registry=private-registry.local
```
https://unicorn.limited/jp/item/662
## Final report
Thank you for your continued work.
This is Ohashi from :thonk_spin.ex-large.rotate.parrot:.
### Checking the current state
Running the following commands confirmed that both pods were stuck in the `Pending` status:
```
[root@k8smaster1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
my-release-mariadb-0 0/1 Pending 0 13d
my-release-redmine-859cf77958-n95j5 0/1 Pending 0 13d
[root@k8smaster1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8smaster1 Ready master 13d v1.15.3
```
This is because, with the default settings, the master node carries a `NoSchedule` taint and Kubernetes does not schedule pods onto it.
We therefore changed this so that pods are also scheduled onto the master node:
```
[root@k8smaster1 ~]# kubectl taint node k8smaster1 node-role.kubernetes.io/master:NoSchedule-
node/k8smaster1 untainted
```
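Adding a toleration to the pods would be the usual alternative, but Redmine_Manifest must not be modified, and this cluster consists of the single master node only, so removing the taint is the option that remains.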
With that change, the two pods move on to `ErrImagePull`.
Checking `/etc/kubernetes/kubelet.env` shows that cri-o is used as the container runtime, and cri-o's configuration does not list `private-registry.local` as an insecure registry, which is why the pulls fail.
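As a cross-check (a sketch; the exact contents of `kubelet.env` vary by installer), the runtime the kubelet registered can also be read from the node object:
```
# Reports the container runtime the node is running on (cri-o here).
kubectl get node k8smaster1 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# The kubelet flags on disk should point at the cri-o socket.
grep -i runtime /etc/kubernetes/kubelet.env
```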
We therefore edited `/usr/lib/systemd/system/crio.service` and added the following arguments to its command line:
```
--insecure-registry=private-registry.local \
--registry=private-registry.local
```
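For the new flags to take effect, the unit presumably has to be reloaded and cri-o restarted (a sketch; restarting the kubelet as well so it reconnects to the runtime is an assumption on my part):
```
systemctl daemon-reload
systemctl restart crio
systemctl restart kubelet
# The pods retry the pull automatically once the runtime trusts the registry.
```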
With this in place the image pulls succeed, but MariaDB now fails to start with a log like the following:
```
[root@k8smaster1 ictsc_problem_manifests]# kubectl log my-release-mariadb-0
log is DEPRECATED and will be removed in a future version. Use logs instead.
08:34:00.38
08:34:00.38 Welcome to the Bitnami mariadb container
08:34:00.39 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
08:34:00.39 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
08:34:00.39 Send us your feedback at containers@bitnami.com
08:34:00.39
08:34:00.40 INFO ==> ** Starting MariaDB setup **
08:34:00.47 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
08:34:00.48 INFO ==> Initializing mariadb database
mkdir: cannot create directory '/bitnami/mariadb/data': Permission denied
```
This is because the host-side directory backing the Persistent Volume is not writable by the container.
We therefore added write permission to the directories under `/var/opt` on the host.
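A minimal sketch of that fix, assuming the manifests define hostPath PersistentVolumes under `/var/opt` (the exact directory names come from Redmine_Manifest and are not repeated here):
```
# Find the hostPath directories backing the claims.
kubectl get pv -o wide
# The Bitnami images run as a non-root user (typically UID 1001), so the
# backing directories must be writable by it; a coarse way is:
chmod -R a+w /var/opt
```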
After these steps Redmine starts, and we confirmed that its data is also persisted.
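A hedged way to re-verify both goals (the pod name is the one from the logs above and changes after recreation):
```
# Redmine should answer on the NodePort from the problem statement.
curl -I http://192.168.0.100:30000/
# Force a restart; the Deployment recreates the pod and the data on the
# PersistentVolume should still be there afterwards.
kubectl delete pod my-release-redmine-859cf77958-n95j5
kubectl get pod -w
```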