# Helm upgrade extensions/v1beta1

- Helm version: 3.2
- Kubernetes version: 1.15
- Chart repository: git@github.com:masmovil/charts.git
- Chart path: base
- Cluster used: mm-k8s-dev-01
- Namespace used: "test"
- Branch with changes: task/devops-573-upgrade-api-version

## 1. Install base helm chart

Install from `master` branch:

```
git checkout master
helm3.2 install test-upgrade . --values values.test.yaml
```

Output:

```
NAME: test-upgrade
LAST DEPLOYED: Mon May 4 12:28:10 2020
NAMESPACE: test
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Application deployed into namespace test

Get the application running in the following URLs:
- https://test.private.dev-01.k8s.masmovil.com

You can check the pods running
kubectl -n test get pods
```

## 2. Upgrade to apps/v1

Preview changes:

```
git checkout task/devops-573-upgrade-api-version
helm3.2 diff upgrade test-upgrade . --values values.test.yaml
```

Output:

```
test, base-nginx-test, Deployment (extensions) has been removed:

- # Source: base/templates/deployment.yaml
- apiVersion: extensions/v1beta1
- kind: Deployment
- metadata:
-   name: base-nginx-test
-   labels:
-     app: base-nginx-test
-     fullapp: test
-     chart: base
-     release: test-upgrade
-     heritage: Helm
-     tier: backend
- spec:
-   selector:
-     matchLabels:
-       app: base-nginx-test
-       fullapp: test
-   replicas: 1
-   template:
-     metadata:
-       annotations:
-         {}
-       labels:
-         app: base-nginx-test
-         fullapp: test
-         release: test-upgrade
-         tier: backend
-     spec:
-       affinity:
-         nodeAffinity:
-           preferredDuringSchedulingIgnoredDuringExecution:
-           - preference:
-               matchExpressions:
-               - key: k8s.masmovil.com/preemptible
-                 operator: In
-                 values:
-                 - "true"
-             weight: 100
-         podAntiAffinity:
-           preferredDuringSchedulingIgnoredDuringExecution:
-           - podAffinityTerm:
-               labelSelector:
-                 matchExpressions:
-                 - key: app
-                   operator: In
-                   values:
-                   - base-nginx-test
-               topologyKey: kubernetes.io/hostname
-             weight: 100
-       volumes:
-         - name: app-config
-           configMap:
-             name: base-nginx-test-config
-       containers:
-         - name: "base-nginx-test"
-           image: "nginx:1.17.8-alpine"
-           imagePullPolicy: IfNotPresent
-           command:
-             - nginx
-             - -g
-             - daemon off;
-           env:
-             - name: CONFIG_PATH
-               value: /etc/nginx/conf.d
-           ports:
-             - containerPort: 8080
-               name: http
-               protocol: TCP
-           livenessProbe:
-             failureThreshold: 3
-             httpGet:
-               httpHeaders:
-               - name: customer-session-id
-                 value: k8s-liveness-probe
-               path: /health
-               port: 8080
-             initialDelaySeconds: 30
-             periodSeconds: 10
-             successThreshold: 1
-             timeoutSeconds: 3
-           readinessProbe:
-             failureThreshold: 1
-             httpGet:
-               httpHeaders:
-               - name: customer-session-id
-                 value: k8s-liveness-probe
-               path: /health
-               port: 8080
-             initialDelaySeconds: 10
-             periodSeconds: 5
-             successThreshold: 1
-             timeoutSeconds: 3
-           volumeMounts:
-             - name: app-config
-               mountPath: /etc/nginx/conf.d
-           resources:
-             limits:
-               cpu: 300m
-               memory: 150Mi
-             requests:
-               cpu: 20m
-               memory: 20Mi

test, base-nginx-test, Deployment (apps) has been added:

+ # Source: base/templates/deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: base-nginx-test
+   labels:
+     app: base-nginx-test
+     fullapp: test
+     chart: base
+     release: test-upgrade
+     heritage: Helm
+     tier: backend
+ spec:
+   selector:
+     matchLabels:
+       app: base-nginx-test
+       fullapp: test
+   replicas: 1
+   template:
+     metadata:
+       annotations:
+         {}
+       labels:
+         app: base-nginx-test
+         fullapp: test
+         release: test-upgrade
+         tier: backend
+     spec:
+       affinity:
+         nodeAffinity:
+           preferredDuringSchedulingIgnoredDuringExecution:
+           - preference:
+               matchExpressions:
+               - key: k8s.masmovil.com/preemptible
+                 operator: In
+                 values:
+                 - "true"
+             weight: 100
+         podAntiAffinity:
+           preferredDuringSchedulingIgnoredDuringExecution:
+           - podAffinityTerm:
+               labelSelector:
+                 matchExpressions:
+                 - key: app
+                   operator: In
+                   values:
+                   - base-nginx-test
+               topologyKey: kubernetes.io/hostname
+             weight: 100
+       volumes:
+         - name: app-config
+           configMap:
+             name: base-nginx-test-config
+       containers:
+         - name: "base-nginx-test"
+           image: "nginx:1.17.8-alpine"
+           imagePullPolicy: IfNotPresent
+           command:
+             - nginx
+             - -g
+             - daemon off;
+           env:
+             - name: CONFIG_PATH
+               value: /etc/nginx/conf.d
+           ports:
+             - containerPort: 8080
+               name: http
+               protocol: TCP
+           livenessProbe:
+             failureThreshold: 3
+             httpGet:
+               httpHeaders:
+               - name: customer-session-id
+                 value: k8s-liveness-probe
+               path: /health
+               port: 8080
+             initialDelaySeconds: 30
+             periodSeconds: 10
+             successThreshold: 1
+             timeoutSeconds: 3
+           readinessProbe:
+             failureThreshold: 1
+             httpGet:
+               httpHeaders:
+               - name: customer-session-id
+                 value: k8s-liveness-probe
+               path: /health
+               port: 8080
+             initialDelaySeconds: 10
+             periodSeconds: 5
+             successThreshold: 1
+             timeoutSeconds: 3
+           volumeMounts:
+             - name: app-config
+               mountPath: /etc/nginx/conf.d
+           resources:
+             limits:
+               cpu: 300m
+               memory: 150Mi
+             requests:
+               cpu: 20m
+               memory: 20Mi
```
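Note that `helm diff` is not part of core Helm; it is provided by the helm-diff plugin (https://github.com/databus23/helm-diff), so the preview step above assumes the plugin is already installed. A minimal sketch of installing it for the same `helm3.2` binary used throughout this document:

```
# Install the helm-diff plugin, which provides the `helm diff upgrade`
# subcommand used in the preview step above.
helm3.2 plugin install https://github.com/databus23/helm-diff
```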
List releases:

```
helm list
NAME            NAMESPACE   REVISION    UPDATED                                 STATUS      CHART           APP VERSION
test-upgrade    test        1           2020-05-04 12:28:10.529788 +0200 CEST   deployed    base-v1.3.0
```

Apply changes:

```
helm3.2 upgrade test-upgrade . --values values.test.yaml
```

Output:

```
Release "test-upgrade" has been upgraded. Happy Helming!
NAME: test-upgrade
LAST DEPLOYED: Mon May 4 12:37:48 2020
NAMESPACE: test
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Application deployed into namespace test

Get the application running in the following URLs:
- https://test.private.dev-01.k8s.masmovil.com

You can check the pods running
kubectl -n test get pods
```

List releases:

```
helm list
NAME            NAMESPACE   REVISION    UPDATED                                 STATUS      CHART           APP VERSION
test-upgrade    test        2           2020-05-04 12:37:48.338596 +0200 CEST   deployed    base-v1.3.1
```

**Note**: I checked the pods: the upgrade did not delete and recreate the Deployment, and the pods stayed up and running with no downtime.

## 3. Check deployment

We check whether the Deployment is now being served under the new `apps/v1` version:

```
k get deploy base-nginx-test -o yaml | grep apiVersion
apiVersion: extensions/v1beta1
```

It is still reported as `extensions/v1beta1`. We should check what happens in these two scenarios (some verification sketches follow below):

1. We keep installing helm release upgrades.
2. We upgrade the cluster to v1.16.
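A likely explanation for this result: in Kubernetes 1.15 the API server serves the same Deployment object under several group versions at once (`extensions/v1beta1`, `apps/v1beta1`, `apps/v1beta2`, `apps/v1`), and `kubectl get deploy` simply prints the group version it picked for the `deploy` shorthand, not the version the manifest was applied with. A minimal check, assuming the same namespace and deployment name as above, is to ask the `apps` group explicitly:

```
# Query the apps API group explicitly; the same object should come back
# with apiVersion apps/v1 if the cluster serves it there.
kubectl -n test get deployments.apps base-nginx-test -o jsonpath='{.apiVersion}'
```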
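Independently of what kubectl prints, we can confirm what Helm itself recorded for revision 2 by inspecting the stored release manifest. A sketch, using the same release name and namespace as above:

```
# Print the apiVersion lines of the manifest Helm stored for the
# latest revision of the release.
helm3.2 get manifest test-upgrade -n test | grep apiVersion
```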
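Regarding scenario 2: Kubernetes 1.16 stops serving Deployment (among other workload kinds) from `extensions/v1beta1`, so it is worth checking which group versions the API server actually serves before and after the cluster upgrade. A sketch:

```
# List the group versions served for the extensions and apps API groups.
kubectl api-versions | grep -E '^(extensions|apps)/'
```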