K8s on vSphere
===
> [name=Alexandre CAUSSIGNAC]
> [time=Sunday, April 12, 2020]

Preamble
---
This post is the first in a series of four articles about the Kubernetes (K8s) orchestration engine. Their goal is to explain how to move from a *Do It Yourself* (DIY) approach to a fully functional, supported, and scalable container platform.
1. **[K8s on vSphere (DIY)](https://hackmd.io/@ac09081979/K8s_on_vSphere)**
1. [K8s on vSphere with Cluster API (DIY)](https://hackmd.io/@ac09081979/K8s_on_vSphere_with_Cluster_API)
1. [Tanzu Kubernetes Grid (TKG) on vSphere](https://hackmd.io/@ac09081979/TKG)
1. [vSphere with K8s (Pacific)](https://hackmd.io/@ac09081979/vSphere_with_K8s)

Introduction
---
While containers remain at the heart of modern infrastructures, attention has now shifted to orchestrators, among which K8s reigns supreme. Among its many advantages is its nearly unlimited extensibility through a set of extension mechanisms. Their purpose is to let K8s manage any kind of object (*Custom Resource Definitions*) and to allow third parties to contribute features to K8s without impacting its core. The key benefit is having well-defined interfaces (compute, network, storage) for any specific development, so that the K8s source code can evolve independently while offering extreme portability at the same time. As an example, one of the most widely used interfaces is the *Container Network Interface* (CNI), since it is practically indispensable to get a functional cluster. A range of solutions is therefore available (Antrea, Flannel, Calico, NSX-T, ...), each covering a varying set of features (overlay, load balancer, ingress controller, network policy, ...), and you will need to integrate one of them into your cluster depending on your architecture choices.
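
> As a concrete illustration (hypothetical commands, assuming a running cluster such as the one built below): the HTTPProxy object installed later with Contour is exactly such a Custom Resource, and it can be listed and queried like any built-in object.
```
# List the Custom Resource Definitions added by extensions (Antrea, Contour, the vSphere CSI driver, ...)
acaussignac@admcli:~$ kubectl get crds
# Show the API resources contributed by Contour once it is installed
acaussignac@admcli:~$ kubectl api-resources --api-group=projectcontour.io
```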
K8s is therefore a wonderful tool for running applications, but it can be complex to install and, above all, to maintain. The goal of this tutorial is to explain, step by step, how to deploy a functional K8s cluster from scratch on a vSphere platform. Along the way, we will run two applications in order to cover networking, storage, and backup/restore.

Lab Environment
---
This lab is a so-called *nested* environment. There are four physical servers on which I deployed four vSphere hypervisors (as virtual machines) and a vCenter Server Appliance (VCSA). To do this, I used a great tool called [cPodFactory](https://github.com/bdereims/cPodFactory), created and maintained by my colleague [Brice DEREIMS](https://fr.linkedin.com/in/brice-dereims-65a3a4). Even though it cannot be used in production for support reasons, it remains very useful for all kinds of testing.
> If you already have an equivalent vSphere environment, you can go straight to the next section.

Here are the products and versions used in this lab:

| Product | Version |
| -------------------------------- | ------------------------------------- |
| VMware vSphere Hypervisor (ESXi) | 6.7 Update 3 (Build Number: 14320388) |
| VMware vCenter Server Appliance | 6.7 Update 3 (Build Number: 14367737) |
| Ubuntu | 18.04.4 LTS |
| govc | 0.22.1 |
| Docker                           | 19.03.6 (build 369ce74a3c)            |
| kubectl | 1.18.0 |
| Kubernetes | 1.18.0 |
And here are the network details of the installed components:

| IP | FQDN |
| ----------- | ---------------------------------------- |
| 172.20.6.3 | vcsa.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.21 | esx-01.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.22 | esx-02.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.23 | esx-03.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.24 | esx-04.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.30 | admcli.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.40 | haproxy01.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.41 | haproxy02.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.42 | apik8s.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.50 | master01.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.51 | master02.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.52 | master03.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.60 | worker01.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.61 | worker02.cpod-aca-k8s.az-lab.shwrfr.com |
| 172.20.6.62 | worker03.cpod-aca-k8s.az-lab.shwrfr.com |

| Range/Subnet              | Function     |
| ------------------------- | ------------ |
| 172.20.6.100-172.20.6.199 | K8s services |
| 172.20.6.200-172.20.6.254 | DHCP |

Lab Preparation
---
- [ ] Create a cPod (named ACA-K8S in this tutorial)
- [ ] Deploy a VCSA
- [ ] Retrieve the generated password
- [ ] Log in to vCenter and update the licenses
- [ ] Update the hosts table of the cPod router
> The cPod router (based on Photon OS) provides the following services: BGP, DHCP, DNS, L3 gateway, NFS server, routing.
```
root@cpodrouter-aca-k8s [ ~ ]# cat /etc/hosts
# Begin /etc/hosts (network card version)
::1 localhost ipv6-localhost ipv6-loopback
127.0.0.1 localhost.localdomain
127.0.0.1 localhost
172.20.6.1 cpodrouter cpod
# End /etc/hosts (network card version)
172.20.6.2 cpodfiler
172.20.6.24 esx-04
172.20.6.23 esx-03
172.20.6.22 esx-02
172.20.6.21 esx-01
172.20.6.3 vcsa
172.20.6.30 admcli
172.20.6.40 haproxy01
172.20.6.41 haproxy02
172.20.6.42 apik8s
172.20.6.50 master01
172.20.6.51 master02
172.20.6.52 master03
172.20.6.60 worker01
172.20.6.61 worker02
172.20.6.62 worker03
```
- [ ] Add a DNS *wildcard* for all K8s services exposed through an Ingress (last line)
```
root@cpodrouter-aca-k8s [ ~ ]# cat /etc/dnsmasq.conf
listen-address=127.0.0.1,172.20.6.1,172.16.2.16
interface=lo,eth0,eth1
bind-interfaces
expand-hosts
bogus-priv
#dns-forward-max=150
#cache-size=1000
domain=cpod-aca-k8s.az-lab.shwrfr.com
local=/cpod-aca-k8s.az-lab.shwrfr.com/
server=/az-lab.shwrfr.com/172.16.2.1
server=172.16.2.1
no-dhcp-interface=lo,eth1
dhcp-range=172.20.6.200,172.20.6.254,255.255.255.0,12h
dhcp-option=option:router,172.20.6.1
dhcp-option=option:ntp-server,172.20.6.1
dhcp-option=option:domain-search,cpod-aca-k8s.az-lab.shwrfr.com
address=/.apps.cpod-aca-k8s.az-lab.shwrfr.com/172.20.6.100
root@cpodrouter-aca-k8s [ ~ ]# systemctl restart dnsmasq.service
```
- [ ] Add the *neighbors* (worker nodes) to the BGP configuration of the cPod router (see the MetalLB section)
> You are not required to use BGP; static routing can be used instead.
```
root@cpodrouter-aca-k8s [ ~ ]# cat /etc/quagga/bgpd.conf
! -*- bgp -*-
!
! BGPd sample configuratin file
!
! $Id: bgpd.conf.sample,v 1.1 2002/12/13 20:15:29 paul Exp $
!
hostname bgpd
password VMware1!
enable password VMware1!
!
!bgp mulitple-instance
!
router bgp 65216
bgp router-id 172.16.2.16
neighbor 172.16.2.1 remote-as 65200
neighbor 172.20.6.60 remote-as 65261
neighbor 172.20.6.61 remote-as 65261
neighbor 172.20.6.62 remote-as 65261
!neighbor 172.16.66.10 default-originate
!redistribute kernel
!redistribute static
redistribute connected
! neighbor 10.0.0.2 route-map set-nexthop out
! neighbor 172.16.0.2 ebgp-multihop
! neighbor 10.0.0.2 next-hop-self
!
access-list all permit any
!
!route-map set-nexthop permit 10
! match ip address all
! set ip next-hop 10.0.0.1
!
log file bgpd.log
!
log stdout
root@cpodrouter-aca-k8s [ ~ ]# systemctl restart bgpd.service
```
- [ ] Enable DRS and HA on the vSphere cluster, adding the options below (a govc sketch follows the table)

| Option | Value |
| --------------------------------- | ----- |
| das.ignoreInsufficientHbDatastore | TRUE |
| das.ignoreRedundantNetWarning | TRUE |
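
> As a CLI alternative (sketch only): once govc is installed on ADMCLI (see further below), DRS and HA can also be enabled with the command shown here; the two das.* advanced options above are then added through the vSphere Client HA advanced options. The cluster path is illustrative.
```
# Assumes the govc environment variables are already exported (see the ADMCLI section)
acaussignac@admcli:~$ govc cluster.change -drs-enabled=true -ha-enabled=true /cPod-ACA-K8S/host/<cluster-name>
```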

Installing the virtual machines (administration, load balancer, master, and worker nodes) based on Ubuntu 18.04.4 LTS
---
- [ ] Deploy the virtual machines and organize them in a folder named K8s

- [ ] Set a static IP address on each virtual machine
```
root@admcli:~# cat /etc/netplan/01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    ens160:
      addresses: [172.20.6.30/24]
      gateway4: 172.20.6.1
      nameservers:
        addresses: [172.20.6.1]
        search: [cpod-aca-k8s.az-lab.shwrfr.com]
      dhcp4: no
root@admcli:~# cat /etc/hosts
127.0.0.1 localhost
172.20.6.30 admcli
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
- [ ] Create DRS anti-affinity rules (load balancer, master, worker); a govc alternative is sketched after the screenshots

---

---

- [ ] Power off the virtual machines, upgrade their hardware compatibility to **version 15**, change the SCSI controller to **VMware Paravirtual**, add the advanced parameter **disk.EnableUUID** set to **TRUE**, and power the virtual machines back on (a govc sketch follows the screenshots)

---

---
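> The hardware version and the disk.EnableUUID parameter can also be set from the CLI; a govc sketch for a single node (repeat for each VM; the SCSI controller change is still done in the vSphere Client):
```
acaussignac@admcli:~$ govc vm.power -off master01
acaussignac@admcli:~$ govc vm.upgrade -version=15 -vm master01
acaussignac@admcli:~$ govc vm.change -vm master01 -e disk.EnableUUID=TRUE
acaussignac@admcli:~$ govc vm.power -on master01
```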

Configuring the load balancer layer
---
- [ ] Install HAProxy on HAPROXY01 and HAPROXY02
```
root@haproxy01:~# apt install -y haproxy
root@haproxy01:~# cat /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend apik8s
    bind 0.0.0.0:6443
    option tcplog
    mode tcp
    default_backend apik8s-master-nodes

backend apik8s-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master01 172.20.6.50:6443 check fall 3 rise 2
    server master02 172.20.6.51:6443 check fall 3 rise 2
    server master03 172.20.6.52:6443 check fall 3 rise 2

listen stats
    bind *:1936
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    stats auth admin:password
    stats uri /stats
root@haproxy01:~# systemctl restart haproxy.service
```

---

- [ ] Install Keepalived on HAPROXY01 and HAPROXY02, adapting the configuration on HAPROXY02 (notification_email_from, state set to BACKUP instead of MASTER, a priority lower than the master's, unicast_src_ip, and unicast_peer)
```
root@haproxy01:~# apt install -y keepalived
root@haproxy01:~# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        sysadmin@cpod-aca-k8s.az-lab.shwrfr.com
        support@cpod-aca-k8s.az-lab.shwrfr.com
    }
    notification_email_from haproxy01@cpod-aca-k8s.az-lab.shwrfr.com
    smtp_server localhost
    smtp_connect_timeout 30
}
vrrp_instance apik8s {
    state MASTER
    interface ens160
    virtual_router_id 42
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_src_ip 172.20.6.40
    unicast_peer {
        172.20.6.41
    }
    virtual_ipaddress {
        172.20.6.42
    }
}
root@haproxy02:~# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        sysadmin@cpod-aca-k8s.az-lab.shwrfr.com
        support@cpod-aca-k8s.az-lab.shwrfr.com
    }
    notification_email_from haproxy02@cpod-aca-k8s.az-lab.shwrfr.com
    smtp_server localhost
    smtp_connect_timeout 30
}
vrrp_instance apik8s {
    state BACKUP
    interface ens160
    virtual_router_id 42
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_src_ip 172.20.6.41
    unicast_peer {
        172.20.6.40
    }
    virtual_ipaddress {
        172.20.6.42
    }
}
root@haproxy01:~# systemctl restart keepalived.service
root@haproxy01:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:50:56:b6:83:11 brd ff:ff:ff:ff:ff:ff
inet 172.20.6.40/24 brd 172.20.6.255 scope global ens160
valid_lft forever preferred_lft forever
inet 172.20.6.42/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:feb6:8311/64 scope link
valid_lft forever preferred_lft forever
acaussignac@tokyo ~ % ping -c3 apik8s.cpod-aca-k8s.az-lab.shwrfr.com
PING apik8s.cpod-aca-k8s.az-lab.shwrfr.com (172.20.6.42): 56 data bytes
64 bytes from 172.20.6.42: icmp_seq=0 ttl=61 time=8.425 ms
64 bytes from 172.20.6.42: icmp_seq=1 ttl=61 time=12.358 ms
64 bytes from 172.20.6.42: icmp_seq=2 ttl=61 time=11.784 ms
--- apik8s.cpod-aca-k8s.az-lab.shwrfr.com ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 8.425/10.856/12.358/1.735 ms
```
> If you want to test the failover, simply reboot HAPROXY01 and check that the "virtual" IP address pointing to the K8s API of our cluster (172.20.6.42) moves to HAPROXY02, then comes back to HAPROXY01 once it has finished rebooting.
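
> A minimal way to observe the failover, assuming SSH access to both load balancers (illustrative commands):
```
acaussignac@tokyo ~ % ping apik8s.cpod-aca-k8s.az-lab.shwrfr.com   # keep this running during the test
root@haproxy01:~# reboot
root@haproxy02:~# ip a show ens160                                 # the VIP 172.20.6.42 should now appear here
```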

Preparing the administration virtual machine
---
- [ ] Install govc on ADMCLI
```
root@admcli:~# curl -L https://github.com/vmware/govmomi/releases/download/v0.22.1/govc_linux_amd64.gz | gunzip > /usr/local/bin/govc
root@admcli:~# chmod +x /usr/local/bin/govc
root@admcli:~# exit
acaussignac@admcli:~$ cat .govc
export GOVC_INSECURE=true
export GOVC_URL="https://vcsa.cpod-aca-k8s.az-lab.shwrfr.com"
export GOVC_USERNAME="administrator@cpod-aca-k8s.az-lab.shwrfr.com"
export GOVC_PASSWORD="password"
acaussignac@admcli:~$ source .govc
acaussignac@admcli:~$ govc ls
/cPod-ACA-K8S/vm
/cPod-ACA-K8S/network
/cPod-ACA-K8S/host
/cPod-ACA-K8S/datastore
```
- [ ] Install kubectl on ADMCLI
```
root@admcli:~# apt install -y apt-transport-https curl
root@admcli:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
root@admcli:~# cat /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
root@admcli:~# apt update
root@admcli:~# apt install -y kubectl
root@admcli:~# apt-mark hold kubectl
```

Preparing the master and worker nodes
---
- [ ] Disable swap, install kubeadm, kubelet, kubectl, and Docker (repeat the commands below on every master and worker node)
```
root@master01:~# swapoff -a
root@master01:~# sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab
root@master01:~# apt install -y apt-transport-https curl gnupg2
root@master01:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
root@master01:~# cat /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
root@master01:~# apt update
root@master01:~# apt install -y kubelet kubeadm kubectl
root@master01:~# apt-mark hold kubelet kubeadm kubectl
root@master01:~# apt install -y docker.io
root@master01:~# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
root@master01:~# systemctl restart docker
root@master01:~# systemctl enable docker
```

Installing K8s on the master and worker nodes
---
- [ ] Initialize the control plane and join the remaining nodes
```
root@master01:~# kubeadm init --control-plane-endpoint "apik8s.cpod-aca-k8s.az-lab.shwrfr.com:6443" --upload-certs --pod-network-cidr 192.168.0.0/16
root@master02:~# kubeadm join apik8s.cpod-aca-k8s.az-lab.shwrfr.com:6443 --token zuc849.8ydnhcuj7x0887yr --discovery-token-ca-cert-hash sha256:fd0b55683c77f488404e5cce20617e70650d5db6ed393932f442ab8d6faf0662 --control-plane --certificate-key 9720da4e183473b937267748ccb9fdeccf7cb1476f92bbc8560db2423c5d3f52
root@master03:~# kubeadm join apik8s.cpod-aca-k8s.az-lab.shwrfr.com:6443 --token zuc849.8ydnhcuj7x0887yr --discovery-token-ca-cert-hash sha256:fd0b55683c77f488404e5cce20617e70650d5db6ed393932f442ab8d6faf0662 --control-plane --certificate-key 9720da4e183473b937267748ccb9fdeccf7cb1476f92bbc8560db2423c5d3f52
root@worker01:~# kubeadm join apik8s.cpod-aca-k8s.az-lab.shwrfr.com:6443 --token zuc849.8ydnhcuj7x0887yr --discovery-token-ca-cert-hash sha256:fd0b55683c77f488404e5cce20617e70650d5db6ed393932f442ab8d6faf0662
root@worker02:~# kubeadm join apik8s.cpod-aca-k8s.az-lab.shwrfr.com:6443 --token zuc849.8ydnhcuj7x0887yr --discovery-token-ca-cert-hash sha256:fd0b55683c77f488404e5cce20617e70650d5db6ed393932f442ab8d6faf0662
root@worker03:~# kubeadm join apik8s.cpod-aca-k8s.az-lab.shwrfr.com:6443 --token zuc849.8ydnhcuj7x0887yr --discovery-token-ca-cert-hash sha256:fd0b55683c77f488404e5cce20617e70650d5db6ed393932f442ab8d6faf0662
```
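> The join token and the certificate key used above expire (by default after 24 hours and 2 hours respectively). If you add nodes later, they can be regenerated; a sketch of the usual commands:
```
# Run on an existing master to print a fresh worker join command (new token)
root@master01:~# kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new certificate key
root@master01:~# kubeadm init phase upload-certs --upload-certs
```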
- [ ] Retrieve the /etc/kubernetes/admin.conf file and copy it to ADMCLI
```
acaussignac@admcli:~$ mkdir -p $HOME/.kube
acaussignac@admcli:~$ vi .kube/config
acaussignac@admcli:~$ chown $(id -u):$(id -g) $HOME/.kube/config
acaussignac@admcli:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 NotReady master 16m v1.18.0
master02 NotReady master 11m v1.18.0
master03 NotReady master 10m v1.18.0
worker01 NotReady <none> 8m51s v1.18.0
worker02 NotReady <none> 7m48s v1.18.0
worker03 NotReady <none> 7m3s v1.18.0
```

---

Installing and configuring the network layer (*Container Network Interface*)
---
- [ ] Install the Antrea SDN to provide the overlay functionality
```
acaussignac@admcli:~$ kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/antrea/master/build/yamls/antrea.yml
acaussignac@admcli:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 26m v1.18.0
master02 Ready master 21m v1.18.0
master03 Ready master 19m v1.18.0
worker01 Ready <none> 18m v1.18.0
worker02 Ready <none> 17m v1.18.0
worker03 Ready <none> 16m v1.18.0
```
- [ ] Install MetalLB to bring Load Balancer as a Service functionality to K8s
```
acaussignac@admcli:~$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
acaussignac@admcli:~$ cat metallb-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 172.20.6.1
      peer-asn: 65216
      my-asn: 65261
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 172.20.6.100-172.20.6.199
acaussignac@admcli:~$ kubectl apply -f metallb-configmap.yaml
```
> You are not required to use BGP with MetalLB; you can simply configure an IP range instead (layer 2 mode). Here is an example configuration without BGP.
```
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.6.100-172.20.6.199
```
- [ ] Check the BGP connectivity on the cPod router
```
cpodrouter-aca-k8s# show ip bgp summary
BGP router identifier 172.16.2.16, local AS number 65216
RIB entries 171, using 19 KiB of memory
Peers 4, using 36 KiB of memory
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
172.16.2.1      4 65200     402     376        0    0    0 06:10:50       86
172.20.6.60     4 65261       1      31        0    0    0 00:00:55        0
172.20.6.61     4 65261       1      31        0    0    0 00:00:55        0
172.20.6.62     4 65261       1      31        0    0    0 00:00:54        0
Total number of neighbors 4
Total num. Established sessions 4
Total num. of routes received 86
```
- [ ] Deploy Contour to provide K8s with an Ingress controller (and more, through the HTTPProxy object type)
```
acaussignac@admcli:~$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
acaussignac@admcli:~$ kubectl apply -f https://projectcontour.io/examples/proxydemo/01-prereq.yaml
```
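> Before creating HTTPProxy objects, it is worth checking that Contour's Envoy service received an external IP from MetalLB; with the pool defined above it should be 172.20.6.100, which is exactly what the DNS wildcard *.apps points to. A quick check, assuming the quickstart's default namespace and service names:
```
acaussignac@admcli:~$ kubectl get pods -n projectcontour
acaussignac@admcli:~$ kubectl get svc envoy -n projectcontour
```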
- [ ] Deploy a first application, without data persistence, to test the Ingress rules
```
acaussignac@admcli:~$ cat contour-httpproxy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: root
  namespace: projectcontour-roots
spec:
  virtualhost:
    fqdn: rootapp.apps.cpod-aca-k8s.az-lab.shwrfr.com
  routes:
    - services:
        - name: rootapp
          port: 80
      conditions:
        - prefix: /
    - services:
        - name: secureapp
          port: 80
      conditions:
        - prefix: /secure
    - services:
        - name: secureapp-default
          port: 80
      conditions:
        - prefix: /securedefault
acaussignac@admcli:~$ kubectl apply -f contour-httpproxy.yaml
acaussignac@admcli:~$ kubectl get HTTPProxy -n projectcontour-roots
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
root rootapp.apps.cpod-aca-k8s.az-lab.shwrfr.com valid valid HTTPProxy
```
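> The routing can then be exercised from any machine that resolves the *.apps wildcard; an illustrative smoke test (the three paths map to the three services declared above):
```
acaussignac@tokyo ~ % curl http://rootapp.apps.cpod-aca-k8s.az-lab.shwrfr.com/
acaussignac@tokyo ~ % curl http://rootapp.apps.cpod-aca-k8s.az-lab.shwrfr.com/secure
acaussignac@tokyo ~ % curl http://rootapp.apps.cpod-aca-k8s.az-lab.shwrfr.com/securedefault
```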

---

---

Installing and configuring the storage layer (*Container Storage Interface*)
---
- [ ] Install the vSphere Cloud Controller Manager (CCM)
```
root@master01:~# cat /etc/kubernetes/vsphere.conf
[Global]
insecure-flag = "true"
[VirtualCenter "172.20.6.3"]
user = "administrator@cpod-aca-k8s.az-lab.shwrfr.com"
password = "password"
port = "443"
datacenters = "cPod-ACA-K8S"
[Network]
public-network = "VM Network"
root@master01:~# cd /etc/kubernetes/
root@master01:/etc/kubernetes# kubectl --kubeconfig /etc/kubernetes/admin.conf create configmap cloud-config --from-file=vsphere.conf --namespace=kube-system
root@master01:/etc/kubernetes# rm vsphere.conf
acaussignac@admcli:~$ kubectl get configmap cloud-config --namespace=kube-system
NAME DATA AGE
cloud-config 1 41s
acaussignac@admcli:~$ kubectl describe nodes | egrep "Taints:|Name:"
Name: master01
Taints: node-role.kubernetes.io/master:NoSchedule
Name: master02
Taints: node-role.kubernetes.io/master:NoSchedule
Name: master03
Taints: node-role.kubernetes.io/master:NoSchedule
Name: worker01
Taints: <none>
Name: worker02
Taints: <none>
Name: worker03
Taints: <none>
acaussignac@admcli:~$ kubectl taint nodes worker01 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
node/worker01 tainted
acaussignac@admcli:~$ kubectl taint nodes worker02 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
node/worker02 tainted
acaussignac@admcli:~$ kubectl taint nodes worker03 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
node/worker03 tainted
acaussignac@admcli:~$ kubectl describe nodes | egrep "Taints:|Name:"
Name: master01
Taints: node-role.kubernetes.io/master:NoSchedule
Name: master02
Taints: node-role.kubernetes.io/master:NoSchedule
Name: master03
Taints: node-role.kubernetes.io/master:NoSchedule
Name: worker01
Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
Name: worker02
Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
Name: worker03
Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
acaussignac@admcli:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
acaussignac@admcli:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
acaussignac@admcli:~$ kubectl apply -f https://github.com/kubernetes/cloud-provider-vsphere/raw/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml
acaussignac@admcli:~$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
antrea-agent-6rsvj 2/2 Running 0 40m
antrea-agent-fhfc5 2/2 Running 0 40m
antrea-agent-gd2sk 2/2 Running 1 40m
antrea-agent-glsqf 2/2 Running 1 40m
antrea-agent-m48p6 2/2 Running 1 40m
antrea-agent-tggwz 2/2 Running 1 40m
antrea-controller-598956597f-nr9hj 1/1 Running 0 40m
coredns-66bff467f8-ht2qk 1/1 Running 0 64m
coredns-66bff467f8-kqflb 1/1 Running 0 64m
etcd-master01 1/1 Running 0 64m
etcd-master02 1/1 Running 0 60m
etcd-master03 1/1 Running 0 58m
kube-apiserver-master01 1/1 Running 0 64m
kube-apiserver-master02 1/1 Running 0 60m
kube-apiserver-master03 1/1 Running 0 58m
kube-controller-manager-master01 1/1 Running 1 64m
kube-controller-manager-master02 1/1 Running 0 60m
kube-controller-manager-master03 1/1 Running 0 58m
kube-proxy-58rqn 1/1 Running 0 60m
kube-proxy-cvgzq 1/1 Running 0 57m
kube-proxy-m2594 1/1 Running 0 56m
kube-proxy-tbrsv 1/1 Running 0 64m
kube-proxy-v56vp 1/1 Running 0 55m
kube-proxy-x4dx8 1/1 Running 0 58m
kube-scheduler-master01 1/1 Running 1 64m
kube-scheduler-master02 1/1 Running 0 60m
kube-scheduler-master03 1/1 Running 0 58m
vsphere-cloud-controller-manager-h6nvt 1/1 Running 0 5m57s
vsphere-cloud-controller-manager-lp27v 1/1 Running 0 5m57s
vsphere-cloud-controller-manager-tf6p8 1/1 Running 0 5m57s
acaussignac@admcli:~$ kubectl describe nodes | egrep "Taints:|Name:"
Name: master01
Taints: node-role.kubernetes.io/master:NoSchedule
Name: master02
Taints: node-role.kubernetes.io/master:NoSchedule
Name: master03
Taints: node-role.kubernetes.io/master:NoSchedule
Name: worker01
Taints: <none>
Name: worker02
Taints: <none>
Name: worker03
Taints: <none>
acaussignac@admcli:~$ kubectl describe nodes | grep ProviderID
ProviderID: vsphere://42361db0-7b20-d688-6da6-7dd338dbe056
ProviderID: vsphere://423628ab-43e3-8b04-cba5-3d2feba30779
ProviderID: vsphere://42362a71-6154-1cae-749b-5d83b2241119
```
- [ ] Install the vSphere Container Storage Interface (CSI) driver
```
root@master01:~# cat /etc/kubernetes/csi-vsphere.conf
[Global]
cluster-id = "demo-cluster-id"
[VirtualCenter "172.20.6.3"]
insecure-flag = "true"
user = "administrator@cpod-aca-k8s.az-lab.shwrfr.com"
password = "password"
port = "443"
datacenters = "cPod-ACA-K8S"
```
> *cluster-id represents the unique cluster identifier. Each Kubernetes cluster should have its own unique cluster-id set in the csi-vsphere.conf file.*
```
root@master01:~# cd /etc/kubernetes/
root@master01:/etc/kubernetes# kubectl --kubeconfig ./admin.conf create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=kube-system
root@master01:/etc/kubernetes# rm csi-vsphere.conf
acaussignac@admcli:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/1.14/rbac/vsphere-csi-controller-rbac.yaml
acaussignac@admcli:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/1.14/deploy/vsphere-csi-controller-ss.yaml
acaussignac@admcli:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/master/manifests/1.14/deploy/vsphere-csi-node-ds.yaml
acaussignac@admcli:~$ kubectl get statefulset --namespace=kube-system
NAME READY AGE
vsphere-csi-controller 1/1 34s
acaussignac@admcli:~$ kubectl get daemonsets vsphere-csi-node --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
vsphere-csi-node 3 3 3 3 3 <none> 44s
acaussignac@admcli:~$ kubectl get CSINode
NAME DRIVERS AGE
master01 0 138m
master02 0 134m
master03 0 132m
worker01 1 131m
worker02 1 130m
worker03 1 129m
acaussignac@admcli:~$ kubectl get csidrivers
NAME ATTACHREQUIRED PODINFOONMOUNT MODES AGE
csi.vsphere.vmware.com true false Persistent 93s
```
- [ ] Create a vSphere tag named K8s and assign it to the appropriate datastore (a govc sketch follows the screenshot)

---

- [ ] Create a vSphere storage policy based on the previously created tag

---

---

---

---

- [ ] Create a K8s StorageClass based on the vSphere storage policy
```
acaussignac@admcli:~$ cat k8s-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: k8s-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "K8s"
  fstype: ext4
acaussignac@admcli:~$ kubectl create -f k8s-storageclass.yaml
acaussignac@admcli:~$ kubectl get sc k8s-sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
k8s-sc (default) csi.vsphere.vmware.com Delete Immediate false 86s
```
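> To validate the CSI driver end to end before deploying a real application, a simple test is to create a PersistentVolumeClaim against this StorageClass and check that it reaches the Bound state (the PVC name is illustrative):
```
acaussignac@admcli:~$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: k8s-sc
EOF
acaussignac@admcli:~$ kubectl get pvc csi-test-pvc
acaussignac@admcli:~$ kubectl delete pvc csi-test-pvc
```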

Installing the Rocketchat application (with data persistence) to test our setup
---
- [ ] Install Helm (an application package manager based on chart repositories, i.e. *marketplaces*)
```
acaussignac@admcli:~$ wget https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
--2020-03-28 19:25:59-- https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
Résolution de get.helm.sh (get.helm.sh)… 152.199.21.175, 2606:2800:233:1cb7:261b:1f9c:2074:3c
Connexion à get.helm.sh (get.helm.sh)|152.199.21.175|:443… connecté.
requête HTTP transmise, en attente de la réponse… 200 OK
Taille : 12269190 (12M) [application/x-tar]
Enregistre : «helm-v3.1.2-linux-amd64.tar.gz»
helm-v3.1.2-linux-amd64.tar.g 100%[===============================================>] 11,70M 70,2MB/s ds 0,2s
2020-03-28 19:25:59 (70,2 MB/s) - «helm-v3.1.2-linux-amd64.tar.gz» enregistré [12269190/12269190]
acaussignac@admcli:~$ tar zxvf helm-v3.1.2-linux-amd64.tar.gz
acaussignac@admcli:~$ sudo mv linux-amd64/helm /usr/local/bin/ && sudo chown root:root /usr/local/bin/helm
acaussignac@admcli:~$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
acaussignac@admcli:~$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories
acaussignac@admcli:~$ helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
"vmware-tanzu" has been added to your repositories
```
> Adding these three repositories makes it possible to install the tools and applications below.
- [ ] Install the Rocketchat application
```
acaussignac@admcli:~$ kubectl create ns rocketchat
namespace/rocketchat created
acaussignac@admcli:~$ cat rocketchat.yaml
persistence:
  enabled: true
service:
  type: LoadBalancer
mongodb:
  mongodbPassword: password
  mongodbRootPassword: password
acaussignac@admcli:~$ helm install rocketchat stable/rocketchat --namespace rocketchat -f rocketchat.yaml
acaussignac@admcli:~$ kubectl get pods -n rocketchat
NAME READY STATUS RESTARTS AGE
rocketchat-mongodb-primary-0 1/1 Running 0 3m17s
rocketchat-rocketchat-785f9db498-l7mjp 1/1 Running 3 3m17s
acaussignac@admcli:~$ kubectl get svc -n rocketchat
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rocketchat-mongodb ClusterIP 10.98.133.239 <none> 27017/TCP 3m32s
rocketchat-mongodb-headless ClusterIP None <none> 27017/TCP 3m32s
rocketchat-rocketchat LoadBalancer 10.97.19.202 172.20.6.101 80:30762/TCP 3m32s
acaussignac@admcli:~$ kubectl get pvc -n rocketchat
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
datadir-rocketchat-mongodb-primary-0 Bound pvc-4c26c9b4-f26c-4a97-a2d5-ddbab49b83d4 8Gi RWO k8s-sc 3m47s
rocketchat-rocketchat Bound pvc-74a9da4b-c9f4-44de-8751-e3f1cf40dd89 8Gi RWO k8s-sc 3m47s
acaussignac@admcli:~$ kubectl annotate pod -n rocketchat --selector=release=rocketchat,app=mongodb backup.velero.io/backup-volumes=datadir --overwrite
```
> The last command annotates the MongoDB pod so that its `datadir` volume is included in the Velero (restic) backup.


---

---

Installing Velero to back up and restore our application
---
- [ ] Install MinIO, an S3-compatible object store that will hold the backups created by Velero
```
acaussignac@admcli:~$ cat minio.yaml
image:
  tag: latest
accessKey: minio
secretKey: minio123
service:
  type: LoadBalancer
defaultBucket:
  enabled: true
  name: velero
persistence:
  size: 5G
acaussignac@admcli:~$ kubectl create ns velero
namespace/velero created
acaussignac@admcli:~$ helm install minio stable/minio --namespace velero -f minio.yaml
acaussignac@admcli:~$ kubectl get svc --namespace velero -l app=minio
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio LoadBalancer 10.100.15.44 172.20.6.102 9000:30378/TCP 6m8s
```

- [ ] Install the Velero server
```
acaussignac@admcli:~$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.3.1/velero-v1.3.1-linux-amd64.tar.gz
acaussignac@admcli:~$ tar zxvf velero-v1.3.1-linux-amd64.tar.gz
acaussignac@admcli:~$ sudo mv velero-v1.3.1-linux-amd64/velero /usr/local/bin/velero && sudo chown root:root /usr/local/bin/velero
acaussignac@admcli:~$ cat credentials-velero
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
acaussignac@admcli:~$ velero install --use-restic --provider aws --plugins velero/velero-plugin-for-aws:v1.0.0 --bucket velero --secret-file ./credentials-velero --use-volume-snapshots=false --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000,publicUrl=http://172.20.6.102:9000
```
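> Once the install command completes, a couple of illustrative checks confirm that the Velero server is running and that the MinIO backup location is reachable:
```
acaussignac@admcli:~$ kubectl get pods -n velero
acaussignac@admcli:~$ velero backup-location get
```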
- [ ] Install the Velero CLI on your workstation (macOS in this tutorial)
```
acaussignac@tokyo ~ % brew install kubectl
acaussignac@tokyo ~ % mkdir .kube
acaussignac@tokyo ~ % vi .kube/config
acaussignac@tokyo ~ % kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 3h13m v1.18.0
master02 Ready master 3h9m v1.18.0
master03 Ready master 3h7m v1.18.0
worker01 Ready <none> 3h6m v1.18.0
worker02 Ready <none> 3h5m v1.18.0
worker03 Ready <none> 3h4m v1.18.0
acaussignac@tokyo ~ % brew install velero
```
- [ ] Run a backup of the application
```
acaussignac@tokyo ~ % velero backup create before-disaster --include-namespaces rocketchat
Backup request "before-disaster" submitted successfully.
Run `velero backup describe before-disaster` or `velero backup logs before-disaster` for more details.
acaussignac@tokyo ~ % velero backup describe before-disaster
Name: before-disaster
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: <none>
Phase: Completed
Namespaces:
  Included:  rocketchat
  Excluded:  <none>
Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto
Label selector:  <none>
Storage Location:  default
Snapshot PVs:  auto
TTL:  720h0m0s
Hooks:  <none>
Backup Format Version:  1
Started:    2020-03-28 20:49:41 +0100 CET
Completed:  2020-03-28 20:50:02 +0100 CET
Expiration:  2020-04-27 21:49:41 +0200 CEST
Persistent Volumes: <none included>
Restic Backups (specify --details for more information):
  Completed:  1
acaussignac@tokyo ~ % velero get backups
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
before-disaster Completed 2020-03-28 20:49:41 +0100 CET 29d default <none>
```
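> Beyond one-off backups, Velero can also run them on a schedule; a sketch creating a daily backup of the namespace (the schedule name and cron expression are illustrative):
```
acaussignac@tokyo ~ % velero schedule create rocketchat-daily --schedule="0 1 * * *" --include-namespaces rocketchat
```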

- [ ] Delete the application entirely and restore it with Velero
```
acaussignac@admcli:~$ kubectl delete ns rocketchat
namespace "rocketchat" deleted
```

```
acaussignac@tokyo ~ % velero restore create --from-backup before-disaster --include-namespaces rocketchat
Restore request "before-disaster-20200328205554" submitted successfully.
Run `velero restore describe before-disaster-20200328205554` or `velero restore logs before-disaster-20200328205554` for more details.
acaussignac@tokyo ~ % velero restore describe before-disaster-20200328205554
Name: before-disaster-20200328205554
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: Completed
Backup: before-disaster
Namespaces:
  Included:  rocketchat
  Excluded:  <none>
Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto
Namespace mappings:  <none>
Label selector:  <none>
Restore PVs:  auto
Restic Restores (specify --details for more information):
  Completed:  1
```
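> Finally, you can check from ADMCLI that the pods, services, and persistent volume claims of the rocketchat namespace are back, for example:
```
acaussignac@admcli:~$ kubectl get pods,svc,pvc -n rocketchat
```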

Conclusion
---
In this tutorial, we deployed a K8s cluster from scratch on a vSphere platform, configuring the CRI, CNI, and CSI interfaces. To validate the cluster, we deployed two applications: one without data persistence, exposed through an Ingress rule, and one with data persistence, exposed through a Load Balancer service. This approach is a fairly quick way to get a K8s cluster in order to build your skills on the subject and get to grips with the interface questions, but it is not necessarily the right way to run a production environment. This is particularly true for Day 1 (K8s Cluster as a Service) and Day 2 operations (scaling, self-healing, upgrades, ...). With that in mind, I invite you to continue with the second tutorial, on Cluster API.

Summary
---
I have tried to list the strengths and weaknesses of this approach:
* **Strengths**: getting to grips with K8s concepts, having multi-master clusters, benefiting from data persistence for the applications that need it.
* **Weaknesses**: provisioning time (scaling), Day 2 operations, no vendor support, having to install and maintain several products to get a complete network stack.