# Kubernetes - General operation

###### tags: `kubernetes`

Kubernetes has a few components that should be checked whenever any problem shows up in the cluster.

## Components

There are several other components besides the ones listed below, but we do not need to worry about them to keep the cluster running. For more information, see the [official documentation](https://kubernetes.io/docs/concepts/overview/components).

### 1. kube-apiserver

The component responsible for exposing the Kubernetes API.

### 2. etcd

The cluster's database; all cluster state is stored in it.

### 3. kubelet

The kubelet is the agent that runs on each physical host of the cluster and is responsible for talking to the apiserver to obtain the actions it must execute.

## Actions

### 1. kubelet

The first component to check is the kubelet. It runs on every physical host of the cluster. You can use the [basic commands](https://hackmd.io/@oAdhuQJ2T8iByJUEQ5jUuQ/S1r-5Y89D) for nodes to verify that everything is fine.

```bash=
$ kubectl get nodes -o wide
NAME    STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
pdt08   Ready    master   108d   v1.18.6   10.120.2.106   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.12
pdt09   Ready    <none>   108d   v1.18.6   10.120.2.107   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.12
pdt10   Ready    <none>   108d   v1.18.6   10.120.2.108   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.12
```

The **STATUS** column in the example above indicates that all three nodes are working correctly. On the physical host itself, you can run the command below to check the status of the service.

```bash=
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2020-08-04 16:12:10 -03; 3 months 24 days ago
     Docs: https://kubernetes.io/docs/
 Main PID: 45144 (kubelet)
    Tasks: 93
   Memory: 100.1M
   CGroup: /system.slice/kubelet.service
           └─45144 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/...

Nov 14 16:44:17 pdt08 kubelet[45144]: E1114 16:44:17.110670   45144 remote_runtime.go:295] ContainerStatus "6bf12d8b0ca30fb739015c7b3a1b807e79b01...52b845076
Nov 14 16:44:17 pdt08 kubelet[45144]: I1114 16:44:17.110751   45144 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b7...def75b766
Nov 14 16:44:17 pdt08 kubelet[45144]: E1114 16:44:17.111434   45144 remote_runtime.go:295] ContainerStatus "b722ac2677cd46beaab1d8aebbdfcaca802e4...def75b766
Nov 14 16:44:17 pdt08 kubelet[45144]: I1114 16:44:17.213015   45144 reconciler.go:196] operationExecutor.UnmountVolume started for volume "volume-robertog...
Nov 14 16:44:17 pdt08 kubelet[45144]: I1114 16:44:17.233450   45144 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io...
Nov 14 16:44:17 pdt08 kubelet[45144]: I1114 16:44:17.313783   45144 reconciler.go:312] operationExecutor.UnmountDevice started for volume "pvc-65...e "pdt08"
Nov 14 16:44:17 pdt08 kubelet[45144]: I1114 16:44:17.499123   45144 iscsi_util.go:731] iscsi: log out target 10.97.146.213:3260 iqn iqn.2016-09.c...e default
Nov 14 16:44:17 pdt08 kubelet[45144]: I1114 16:44:17.516089   45144 iscsi_util.go:739] iscsi: delete node record target 10.97.146.213:3260 iqn iq...9040af218
Nov 14 16:44:17 pdt08 kubelet[45144]: I1114 16:44:17.522494   45144 operation_generator.go:869] UnmountDevice succeeded for volume "pvc-655ee923-cb03-4d1e...
Nov 14 16:44:17 pdt08 kubelet[45144]: I1114 16:44:17.615038   45144 reconciler.go:319] Volume detached for volume "pvc-655ee923-cb03-4d1e-92f7-df89040af21...
Hint: Some lines were ellipsized, use -l to show in full.
```
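As the `systemctl` output hints, `status` shows only the most recent log lines. Since the kubelet is a systemd unit, its full logs live in the journal; a minimal sketch of how to read them (the time window below is just an illustrative value):

```bash=
# Follow the kubelet logs in real time
$ journalctl -u kubelet -f

# Or inspect only recent entries, e.g. the last hour
$ journalctl -u kubelet --since "1 hour ago"
```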
The kubelet is managed through systemd and works like any other service, so to restart it we can use the command below.

```bash=
$ systemctl restart kubelet
```

### 2. kube-apiserver and etcd

These two components are started by the kubelet. We can check whether they are running correctly through their Kubernetes pods; however, if the cluster has a problem this may not be possible, so we can also inspect the Docker containers directly. Below is an example of how to list the Kubernetes core components. All of them live in the `kube-system` namespace.

```bash=
$ kubectl get pods -o wide -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE    IP             NODE    NOMINATED NODE   READINESS GATES
coredns-66bff467f8-7g6xg        1/1     Running   0          115d   192.168.4.66   pdt08   <none>           <none>
coredns-66bff467f8-bwh55        1/1     Running   0          115d   192.168.4.65   pdt08   <none>           <none>
etcd-pdt08                      1/1     Running   0          115d   10.120.2.106   pdt08   <none>           <none>
kube-apiserver-pdt08            1/1     Running   0          115d   10.120.2.106   pdt08   <none>           <none>
kube-controller-manager-pdt08   1/1     Running   0          115d   10.120.2.106   pdt08   <none>           <none>
kube-proxy-hqrbz                1/1     Running   0          115d   10.120.2.107   pdt09   <none>           <none>
kube-proxy-nlf9g                1/1     Running   0          115d   10.120.2.108   pdt10   <none>           <none>
kube-proxy-wwktk                1/1     Running   0          115d   10.120.2.106   pdt08   <none>           <none>
kube-scheduler-pdt08            1/1     Running   0          115d   10.120.2.106   pdt08   <none>           <none>
```

If the cluster has a problem, the command above may not work correctly. In that case, use the commands below.

```bash=
# etcd
$ docker ps -f name=etcd
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
b9dfd0a04226        303ce5db0e90           "etcd --advertise-cl…"   3 months ago        Up 3 months                             k8s_etcd_etcd-pdt08_kube-system_75028e0b3e4bc53d8f590582c7d5a4cc_0
8e0df99f8087        k8s.gcr.io/pause:3.2   "/pause"                 3 months ago        Up 3 months                             k8s_POD_etcd-pdt08_kube-system_75028e0b3e4bc53d8f590582c7d5a4cc_0

# kube-apiserver
$ docker ps -f name=kube-apiserver
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
734f10f9fc53        56acd67ea15a           "kube-apiserver --ad…"   3 months ago        Up 3 months                             k8s_kube-apiserver_kube-apiserver-pdt08_kube-system_ce269816cb87a0e9cdc012ae7427e6f8_0
6ca6ecf182be        k8s.gcr.io/pause:3.2   "/pause"                 3 months ago        Up 3 months                             k8s_POD_kube-apiserver-pdt08_kube-system_ce269816cb87a0e9cdc012ae7427e6f8_0
```
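The reason the kubelet starts these components is that, on a kubeadm-based cluster like this one (note the `10-kubeadm.conf` drop-in in the `systemctl` output above), they run as static pods: the kubelet watches a manifest directory and runs whatever pod definitions it finds there. A minimal sketch of where to look, assuming the kubeadm default path `/etc/kubernetes/manifests`:

```bash=
# Static pod manifests watched by the kubelet (typical kubeadm layout)
$ ls /etc/kubernetes/manifests
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```

Editing or removing one of these files makes the kubelet recreate or stop the corresponding pod, which is another way to restart kube-apiserver or etcd when `kubectl` itself is unavailable.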
We can also check the logs of these two components to verify that everything is fine.

```bash=
# kube-apiserver
$ docker logs -f --tail 128 $(docker ps -f name=kube-apiserver --format '{{.Names}}' | grep -v POD)
Trace[1060186500]: [1.256819986s] [1.256469829s] Object stored in database
I1121 02:59:36.132884       1 trace.go:116] Trace[178856358]: "Update" url:/api/v1/namespaces/jupyter/configmaps/user-scheduler,user-agent:kube-scheduler/v1.13.12 (linux/amd64) kubernetes/a8b5220/leader-election,client:10.120.2.108 (started: 2020-11-21 02:59:34.87605089 +0000 UTC m=+9359253.351569989) (total time: 1.256782792s):
Trace[178856358]: [1.256672269s] [1.256340737s] Object stored in database
I1121 02:59:36.133873       1 trace.go:116] Trace[363185012]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-11-21 02:59:35.255700824 +0000 UTC m=+9359253.731219906) (total time: 878.131973ms):
Trace[363185012]: [878.131973ms] [878.131973ms] END
I1121 02:59:36.134183       1 trace.go:116] Trace[1664725562]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.18.6 (linux/amd64) kubernetes/dff82dc/system:serviceaccount:kube-system:cronjob-controller,client:10.120.2.106 (started: 2020-11-21 02:59:35.255617175 +0000 UTC m=+9359253.731136254) (total time: 878.492842ms):
Trace[1664725562]: [878.28065ms] [878.236552ms] Listing from storage done
I1121 03:12:13.934274       1 trace.go:116] Trace[1683369896]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-11-21 03:12:13.388038896 +0000 UTC m=+9360011.863557979) (total time: 546.145129ms):
Trace[1683369896]: [546.054742ms] [545.276658ms] Transaction committed
I1121 03:12:13.934584       1 trace.go:116] Trace[1731583691]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election,client:10.120.2.106 (started: 2020-11-21 03:12:13.387827361 +0000 UTC m=+9360011.863346433) (total time: 546.705611ms):
Trace[1731583691]: [546.586005ms] [546.454803ms] Object stored in database
I1121 03:12:13.934764       1 trace.go:116] Trace[1916787481]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election,client:10.120.2.106 (started: 2020-11-21 03:12:13.390314431 +0000 UTC m=+9360011.865833504) (total time: 544.394223ms):
Trace[1916787481]: [544.294117ms] [544.283228ms] About to write a response
```
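Besides reading the logs, the apiserver exposes a built-in health endpoint that can be queried directly. A minimal sketch, assuming `kubectl` can still reach the apiserver; a plain `ok` means it considers itself healthy:

```bash=
# Query the apiserver's own health endpoint
$ kubectl get --raw /healthz
ok
```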
```bash=
# etcd
$ docker logs -f --tail 32 $(docker ps -f name=etcd --format '{{.Names}}' | grep -v POD)
2020-11-28 12:43:54.999951 I | mvcc: store.index: compact 46458606
2020-11-28 12:43:55.028089 I | mvcc: finished scheduled compaction at 46458606 (took 26.959497ms)
2020-11-28 12:48:55.007376 I | mvcc: store.index: compact 46460073
2020-11-28 12:48:55.036837 I | mvcc: finished scheduled compaction at 46460073 (took 28.184239ms)
2020-11-28 12:53:55.014143 I | mvcc: store.index: compact 46461540
2020-11-28 12:53:55.044014 I | mvcc: finished scheduled compaction at 46461540 (took 28.837844ms)
2020-11-28 12:58:29.769561 I | etcdserver: start to snapshot (applied: 49575550, lastsnap: 49565549)
2020-11-28 12:58:29.771723 I | etcdserver: saved snapshot at index 49575550
2020-11-28 12:58:29.772155 I | etcdserver: compacted raft log at 49570550
2020-11-28 12:58:36.178394 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000002f3b329.snap successfully
2020-11-28 12:58:55.020101 I | mvcc: store.index: compact 46463003
2020-11-28 12:58:55.049995 I | mvcc: finished scheduled compaction at 46463003 (took 28.68868ms)
2020-11-28 13:03:55.026123 I | mvcc: store.index: compact 46464470
2020-11-28 13:03:55.054410 I | mvcc: finished scheduled compaction at 46464470 (took 27.171409ms)
2020-11-28 13:08:55.031426 I | mvcc: store.index: compact 46465941
2020-11-28 13:08:55.059431 I | mvcc: finished scheduled compaction at 46465941 (took 27.045341ms)
2020-11-28 13:13:55.037578 I | mvcc: store.index: compact 46467404
2020-11-28 13:13:55.065798 I | mvcc: finished scheduled compaction at 46467404 (took 27.119789ms)
2020-11-28 13:16:46.050794 I | wal: segmented wal file /var/lib/etcd/member/wal/000000000000033e-0000000002f48ce4.wal is created
2020-11-28 13:16:59.562382 I | pkg/fileutil: purged file /var/lib/etcd/member/wal/0000000000000339-0000000002f03741.wal successfully
2020-11-28 13:18:55.043776 I | mvcc: store.index: compact 46468907
2020-11-28 13:18:55.080215 I | mvcc: finished scheduled compaction at 46468907 (took 35.097199ms)
2020-11-28 13:23:55.049253 I | mvcc: store.index: compact 46470370
2020-11-28 13:23:55.077129 I | mvcc: finished scheduled compaction at 46470370 (took 26.744679ms)
2020-11-28 13:28:55.053940 I | mvcc: store.index: compact 46471841
2020-11-28 13:28:55.092739 I | mvcc: finished scheduled compaction at 46471841 (took 37.564316ms)
2020-11-28 13:30:27.713636 I | etcdserver: start to snapshot (applied: 49585551, lastsnap: 49575550)
2020-11-28 13:30:27.716371 I | etcdserver: saved snapshot at index 49585551
2020-11-28 13:30:27.717036 I | etcdserver: compacted raft log at 49580551
2020-11-28 13:30:36.200483 I | pkg/fileutil: purged file /var/lib/etcd/member/snap/0000000000000002-0000000002f3da3a.snap successfully
2020-11-28 13:33:55.060013 I | mvcc: store.index: compact 46473308
2020-11-28 13:33:55.090304 I | mvcc: finished scheduled compaction at 46473308 (took 28.616921ms)
```
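Finally, etcd can report its own health through `etcdctl`, which ships inside the etcd container (the etcd 3.4 image used by Kubernetes v1.18 defaults to the v3 API). A minimal sketch, assuming the default kubeadm certificate paths under `/etc/kubernetes/pki/etcd`; adjust them if your cluster stores the certificates elsewhere:

```bash=
# Run etcdctl inside the etcd container to check endpoint health
$ docker exec $(docker ps -f name=etcd --format '{{.Names}}' | grep -v POD) \
    etcdctl --endpoints=https://127.0.0.1:2379 \
            --cacert=/etc/kubernetes/pki/etcd/ca.crt \
            --cert=/etc/kubernetes/pki/etcd/server.crt \
            --key=/etc/kubernetes/pki/etcd/server.key \
            endpoint health
# Expected output when healthy:
# https://127.0.0.1:2379 is healthy: successfully committed proposal: took = ...
```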