# Devx - Howdy K8s Gardener!!!! <img src="https://i.imgur.com/ZaIgBiK.png" width="50">
# Create your first Gardener Kubernetes cluster
1. Log on to the [Gardener Dashboard](https://dashboard.garden.canary.k8s.ondemand.com/login) and choose CREATE YOUR FIRST PROJECT.

2. Provide a project Name and, optionally, a Description and a Purpose, then choose CREATE.
**Note:** You will not be able to change the project Name later. The rest of the details are editable.
:bangbang: Since we are only creating trial clusters, nothing will be charged. There is no need to enter a cost center.

3. The result will be similar to this.

4. In the dashboard navigation on the left, choose CLUSTERS, and then choose the plus button.

**User IDs 1-60: choose AWS** :red_circle:
**User IDs above 60: choose GCP** :red_circle:
5. In the **Infrastructure** section, choose **AWS** or **GCP** as the IaaS provider.
In the **Cluster Details** section of the configuration screen, change the autogenerated **Cluster Name**.
Make sure that the **trial secret** selected in the **Infrastructure Details > Secret** dropdown is ```trial-secretbinding-aws``` (for AWS), and set the number of **worker nodes to at least 3**.
**AWS: user1 - user60**

**GCP: above user60**

6. While the cluster is in the creating state, you can go through the [gardener internal documentation](https://pages.github.tools.sap/kubernetes/gardener/docs/home/) and learn more at https://gardener.cloud.
## Accessing the Cluster and Toolset
1. After a few minutes the K8s cluster should be in the ready state, as shown below.

2. Click the key icon in the actions column to get the kubeconfig and copy it to the clipboard.


3. Open the IDE environment given to you. If one was not provided, get one from the mediator. Example: https://user.devx.perfteam.sapcloud.io
4. Create a new file, paste the copied kubeconfig, and save it.
5. Open a new terminal window in the IDE: `Terminal --> New Terminal`

6. Run `export KUBECONFIG=/home/project/<FILENAME>`

7. Check that you can communicate with the Kubernetes cluster: `kubectl cluster-info`

## Kubernetes Cluster :+1:
Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way. Kubernetes is an open-source platform and is production-ready.

A Kubernetes cluster consists of two types of resources:
* The **Master** coordinates the cluster
* **Nodes** are the workers that run applications
> ```
> kubectl cluster-info
> ```
> ```
> kubectl get nodes
> ```

These commands show the cluster details and the number of nodes in the cluster.
## Deploy Your App :+1:
Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of your application. Once you've created a Deployment, the Kubernetes master schedules the application instances onto individual Nodes in the cluster.

Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces it. **This provides a self-healing mechanism to address machine failure or maintenance.**
> ```
> kubectl create deployment pingme --image=kesavanking/k8s-devx:1.0
>```
> ```
> kubectl get deployments
> ```
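The imperative `kubectl create deployment` command above generates a Deployment object for you. A roughly equivalent declarative manifest would look like this (a sketch; the container name `k8s-devx` is chosen to match the `logs -c` and `set image` steps used later):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pingme
  labels:
    app: pingme
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pingme
  template:
    metadata:
      labels:
        app: pingme
    spec:
      containers:
      - name: k8s-devx
        image: kesavanking/k8s-devx:1.0
```

You could save this as `deployment.yaml` and run `kubectl apply -f deployment.yaml` instead of the imperative command.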
## Explore Your App :+1:
When you created a Deployment, Kubernetes created a **Pod** to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers and some shared resources for those containers (storage volumes, a unique cluster IP address).

A Pod always runs on a **Node**. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple Pods, and the Kubernetes master automatically handles scheduling the Pods across the Nodes in the cluster. The Master's automatic scheduling takes into account the available resources on each Node.

You will use the kubectl command-line interface to get information about deployed applications and their environments:
* **kubectl get** - list resources
> ```
> kubectl get pods
> ```
* **kubectl describe** - show detailed information about a resource
> ```
> kubectl describe pod <POD NAME>
> ```
* **kubectl logs** - print the logs from a container in a pod
> ```
> kubectl logs <POD NAME> -c k8s-devx -f
> ```
* **kubectl exec** - execute a command on a container in a pod
> ```
> kubectl exec -it <POD NAME> -- /bin/sh
> ```
> ```
> ps -a
> ```
You can see the hello application running.
> ```
> exit
> ```
We opened a shell on the container and could inspect the *hello-app* binary of the Go application.
## Self Healing
Let's see the self-healing of Deployments. We will simulate an application crash by simply deleting the pod and watch how Kubernetes brings a new pod up, since the pod is managed by a Deployment.
```
kubectl delete pod <POD NAME>
```
You should see a new pod coming up immediately.
## Expose Your App :+1:
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster. A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
* ***ClusterIP*** (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
* ***NodePort*** - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using `<NodeIP>:<NodePort>`. Superset of ClusterIP.
* ***LoadBalancer*** - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
* ***ExternalName*** - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.

To create a new Service and expose it to external traffic, we'll use the expose command with `LoadBalancer` as the type:
> ```
> kubectl expose deployment pingme --type=LoadBalancer --port 80 --target-port 8080
> ```
> ```
> kubectl get services
> ```
The External IP will initially be in the pending state. Wait a while until your application gets an External IP. Once you've determined the external IP address of your application, copy it. Point your browser to the External IP URL (such as http://203.0.113.0) to check that your application is accessible.
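Under the hood, the `kubectl expose` command above creates a Service object roughly like the following (a sketch; the field values mirror the command's flags):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pingme
spec:
  type: LoadBalancer
  selector:
    app: pingme
  ports:
  - port: 80          # port the Service is reachable on
    targetPort: 8080  # port the container listens on
```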
## Scale Your App :+1:
In the previous modules we created a Deployment, and then exposed it publicly via a Service. The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand.
Scaling is accomplished by changing the number of replicas in a Deployment.

> ```
> kubectl scale deployment pingme --replicas=3
> ```
> ```
> kubectl get pods
> ```
Now you have multiple instances of your application running independently of each other. The load balancer you provisioned in the previous step will start routing traffic to these new replicas automatically.
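Scaling can equivalently be done declaratively: set `spec.replicas` in the Deployment manifest and re-apply it. The fragment below is a sketch of the relevant field:

```yaml
# Fragment of the pingme Deployment spec; only replicas changes.
spec:
  replicas: 3
```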
## Update Your App
Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. **Rolling updates** allow a Deployment's update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled on Nodes with available resources.
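The pace of a rolling update can be tuned via the Deployment's update strategy; the fragment below shows the defaults Kubernetes uses when nothing is specified (a sketch):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # how many Pods may be unavailable during the update
      maxSurge: 25%        # how many extra Pods may exist above the desired count
```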

To update the image of the application to version 2, use the **set image** command, followed by the deployment name and the new image version:
> ```
> kubectl set image deployment/pingme k8s-devx=kesavanking/k8s-devx:2.0
> ```
> ```
> kubectl get pods -l app=pingme
> ```
You will see new containers being created with the new image. Access the service as in the module above and check the version change in the web output.
When you access the service, the output will be similar to this:
```
Hello SAP!
Version: 2.0.0
Hostname: pingme-76975fc9d6-bdm47
```
*Happy Kubernetes* :tada:
## Delete Your Resources
---
Once you have completed the exercise, please delete your resources for other participants' convenience.
> ```
> kubectl delete deployment pingme
> ```
> ```
> kubectl delete service pingme
> ```
---
# Use Gardener Features for applications.
## Enough Hello World !!
We will deploy [minio](https://min.io/), a native kubernetes object storage.
1. Create the `minio` namespace.
```kubectl create namespace minio```
2. Install the minio resources.
```kubectl apply -f minio/minio-original.yaml -n minio```
3. Check that the minio pod is running successfully.
```kubectl get pods -n minio```
4. To see all resources created in the namespace:
```kubectl get all -n minio```
## Expose minio via ingress
Now let's expose the minio object store through an [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/). Gardener by default provides an nginx ingress controller add-on which can be used for dev purposes.
Access the [Gardener dashboard](https://dashboard.garden.canary.k8s.ondemand.com/). Go to the project and click on the cluster which you created. On the `Overview` page, go to the **add-on section** and enable nginx ingress. Wait until the cluster gets reconciled and turns green.
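For reference, the dashboard toggle corresponds to this field in the shoot specification (a fragment sketch; your shoot manifest will contain many more fields):

```yaml
spec:
  addons:
    nginxIngress:
      enabled: true
```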

On the same overview page, under the Infrastructure section, you can find your **ingress domain**. It will be in the format ```*.ingress.<shootname>.<project>.shoot.canary.k8s-hana.ondemand.com```.
:bangbang::bangbang: When copying, drop the leading **`*.`** from the ingress domain.
**Example:**
INGRESS_DOMAIN = `ingress.<SHOOT>.<PROJECT>.shoot.canary.k8s-hana.ondemand.com`
Copy the ingress object below into a file, replace **INGRESS_DOMAIN**, and save it.
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio
  annotations:
    cert.gardener.cloud/purpose: managed
spec:
  tls:
  # Must not exceed 64 characters.
  - hosts:
    - "minio.INGRESS_DOMAIN"
    secretName: cert-tls
  rules:
  - host: minio.INGRESS_DOMAIN
    http:
      paths:
      - backend:
          service:
            name: minio
            port:
              number: 9000
        path: /
        pathType: ImplementationSpecific
```
Save the file, then run `kubectl apply -f <ingress file> -n minio`.
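If you prefer to script the INGRESS_DOMAIN substitution, here is a small shell sketch (the shoot and project names below are made-up placeholders, not your real ones):

```shell
# The dashboard shows the domain with a leading "*." wildcard;
# strip it before substituting into the manifest.
raw='*.ingress.myshoot.myproject.shoot.canary.k8s-hana.ondemand.com'
INGRESS_DOMAIN="${raw#\*.}"
echo "$INGRESS_DOMAIN"
```

You can then render the manifest with, for example, `sed "s/INGRESS_DOMAIN/$INGRESS_DOMAIN/g" <ingress file>`.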
## Request X.509 Certificates
You might have noticed the annotation `cert.gardener.cloud/purpose: managed` in the ingress object we created. Gardener lets you request a commonly trusted X.509 certificate for your application endpoint. For more details, [check here](https://gardener.cloud/050-tutorials/content/howto/x509_certificates/).
The TLS certificate will be stored as a secret. Check the status of the certificate creation:
`kubectl describe ing minio -n minio`

`kubectl get ingress -n minio`
You should see the ingress URL. Now open the ingress URL in your browser.

Get the access key and secret key to log in to minio:
**ACCESS KEY:**
```kubectl get secret minio -n minio -o jsonpath="{.data.accesskey}" | base64 -d```
**SECRET KEY:**
```kubectl get secret minio -n minio -o jsonpath="{.data.secretkey}" | base64 -d```
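Secret values in Kubernetes are stored base64-encoded, which is why both commands above pipe the jsonpath output through `base64 -d`. A quick local illustration (the value `minioadmin` is a made-up example, not your real key):

```shell
# Encode a sample value the way Kubernetes stores Secret data ...
encoded=$(printf 'minioadmin' | base64)
echo "$encoded"    # -> bWluaW9hZG1pbg==
# ... and decode it back, as the kubectl commands above do.
printf '%s' "$encoded" | base64 -d
```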
Create a bucket and upload some files to the minio store.

# Volume Snapshot and Restore
[Volume snapshots](https://kubernetes.io/docs/concepts/storage/volume-snapshots) provide Kubernetes users with a standardized way to copy a volume's contents at a particular point in time without creating an entirely new volume. This functionality enables, for example, database administrators to back up databases before performing edit or delete modifications.
This section shows how to take volume snapshots of your applications' persistent volume claims and how to restore them in case of disruption to your application's data, or even to clone the data into a completely new volume.
Let's use the same volume which we provisioned while installing minio.
:writing_hand::exclamation: Please upload some junk files to the object store so that we can see the real use of volume snapshot and restore.

### Volume Snapshot
1. Get the persistent volume claim name. You should see a persistent volume claim bound to the minio pod.
```kubectl get pvc -n minio```

2. Create a snapshot of the claim with the following manifest. Save the following content to a file.
```
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: minio-snapshot
spec:
  volumeSnapshotClassName: default
  source:
    persistentVolumeClaimName: minio
```
Save the file and run:
```kubectl apply -f <snapshot manifest> -n minio```
3. Wait until the VolumeSnapshot becomes ready to use.
```kubectl get volumesnapshot minio-snapshot -n minio```

4. Now it is safe to delete the minio deployment; afterwards we will see the restoration from the snapshot.
```kubectl delete -f minio/minio-original.yaml -n minio```
### Restore Snapshot
1. Now we will create a new persistent volume claim by restoring from the volume snapshot which we took earlier.
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-restore
spec:
  dataSource:
    name: minio-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
Save the file and run ```kubectl apply -f <restore_pvc file> -n minio```
2. Deploy the minio/minio-restore.yaml file.
```kubectl apply -f minio/minio-restore.yaml -n minio```
3. Wait till the pod is ready.
```kubectl wait --for=condition=Ready pod -l app=minio -n minio```
4. Access the same **ingress** URL which you deployed before. You should see the same buckets and files restored. :clap: :clap:
--------------------
:seedling: HAPPY GARDENING....:seedling:
<img src="https://i.imgur.com/ZaIgBiK.png" width="30"> https://gardener.cloud/
<img src="https://i.imgur.com/3Oofwji.png" width="40"> https://sap-cp.slack.com/archives/C9CEBQPGE