# Kubernetes cluster creation from scratch
###### tags: `Kubernetes` `Doc` `Tutorial`
## Prerequisites
You will need:
- [ ] kops (current version v1.16.0)
- [ ] kubectl (current version v1.17.3)
- [ ] A route53 domain registered (for our example: reminiz.io)
- [ ] Have Admin access to the AWS console (to simplify the process)
## Configuration and creation
### Choose a domain
Choose a domain for your cluster: it can be anything, as long as you can create it with route53.
For our example: `development.cluster.reminiz.io`.
### Create a hosted zone
Create a hosted zone for your domain with route53.
You can either create it with the cli, or the web console.
- CLI: `aws route53 create-hosted-zone --name development.cluster.reminiz.io --caller-reference 1`
- Web console: go to route53, create a new hosted zone.
Once it is created, you will need to copy the NS records of this new hosted zone into your root domain (for this example: `reminiz.io`):
- Copy the NS records (usually 4 lines) of `development.cluster.reminiz.io`.
- Go to the hosted zone `reminiz.io`, create an NS record set named `development.cluster.reminiz.io`, and paste the NS records into it.
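The delegation step above can also be scripted with the AWS CLI. This is a sketch only: the hosted zone ID and the four name server values below are placeholders you must replace with the output of your own `create-hosted-zone` call.

```shell
# Delegate development.cluster.reminiz.io from the root zone (reminiz.io).
# ZONE_ID and the ns-*.awsdns-* values are placeholders: look them up with
# `aws route53 list-hosted-zones` and the create-hosted-zone output.
ZONE_ID="Z_ROOT_ZONE_ID"
aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "development.cluster.reminiz.io",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          {"Value": "ns-1.awsdns-01.com."},
          {"Value": "ns-2.awsdns-02.net."},
          {"Value": "ns-3.awsdns-03.org."},
          {"Value": "ns-4.awsdns-04.co.uk."}
        ]
      }
    }]
  }'
```

`UPSERT` creates the record set if it does not exist and overwrites it otherwise, so the command is safe to re-run.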
To validate this step, you can check with a `dig` command:
`dig NS development.cluster.reminiz.io`
You should see the NS records you copied in the output.
### Bucket creation
To store the cluster configuration files, we will create a dedicated bucket.
Choose the name wisely. For our example, as we will (surely) have several clusters linked to `reminiz.io`, our bucket will be named `clusters.reminiz.io`.
You can create the bucket either with the CLI or the web console:
- CLI: `aws s3 mb s3://clusters.reminiz.io`
- Web console: go to S3, create the bucket with the default params (private).
Once the bucket is created, export the state store location:
`export KOPS_STATE_STORE=s3://clusters.reminiz.io`
`kops` needs this environment variable to know where to store the cluster configuration.
**Hint**: you can export this environment variable in your bashrc / zshrc.
### Cluster creation
In this section, we will use kops to create the cluster.
Execute the following command:
```
kops create cluster \
--zones=eu-central-1a,eu-central-1b,eu-central-1c \
development.cluster.reminiz.io
```
For this example, the only parameter we are using is `--zones`, which lists the availability zones we want our cluster to span (here, we deploy the cluster in `eu-central-1` (Frankfurt), across all three availability zones).
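Beyond `--zones`, `kops create cluster` accepts flags to size the cluster up front. A hedged sketch: the node count and instance types below are illustrative assumptions, not values prescribed by this tutorial.

```shell
# Illustrative only: node count and instance types are assumptions.
# --node-count sets the number of worker nodes, --node-size and
# --master-size set the EC2 instance types for workers and masters.
kops create cluster \
  --zones=eu-central-1a,eu-central-1b,eu-central-1c \
  --node-count=2 \
  --node-size=t3.medium \
  --master-size=t3.medium \
  development.cluster.reminiz.io
```

Without these flags, kops falls back to its defaults, which is what the command above in this tutorial does.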
This command will create the configuration files and store them in the bucket we created before.
**But** it will not create any resources in AWS yet.
To create the resources:
```
kops update cluster development.cluster.reminiz.io --yes
```
This will take a few minutes... be patient!
Let's check our cluster nodes: `kubectl get nodes`
You should see one master node and two worker nodes. Our cluster is ready to use!
Response example:
```
NAME                                             STATUS   ROLES    AGE     VERSION
ip-172-20-108-26.eu-central-1.compute.internal   Ready    node     2m35s   v1.16.7
ip-172-20-41-167.eu-central-1.compute.internal   Ready    master   4m19s   v1.16.7
ip-172-20-88-44.eu-central-1.compute.internal    Ready    node     2m34s   v1.16.7
```
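Besides `kubectl get nodes`, kops ships its own end-to-end check. `kops validate cluster` verifies that the API server is reachable and that every instance group has joined the cluster:

```shell
# Validates that all masters and nodes are up and the API server responds.
# Requires KOPS_STATE_STORE to be exported as described above.
kops validate cluster development.cluster.reminiz.io
```

Right after `kops update cluster --yes`, validation may fail for a few minutes while nodes register; just retry until it passes.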
### Kubernetes addons
To install kubernetes add-ons, we will use Helm as a "package manager" for k8s.
#### Installing Helm
Follow this tutorial to install [Helm](https://helm.sh/docs/intro/install/)
Once Helm is installed, we can add/update/delete deployments, called "Charts" in the Helm ecosystem. Go check the documentation to learn more about this tool.
In addition to the official `stable` repository, we will need to add the Bitnami repository:
```
helm repo add bitnami https://charts.bitnami.com/bitnami
```
(This is needed for external-dns for example)
#### Monitoring via Prometheus + Grafana
To monitor your cluster, you need two products:
- [Prometheus](https://github.com/giantswarm/prometheus) to collect metrics about your cluster.
- [Grafana](https://github.com/grafana/grafana) to visualize Prometheus data in dashboards.
##### Prometheus install
To install, run:
```
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
```
Then run:
```
helm install prometheus stable/prometheus-operator --set prometheusOperator.createCustomResource=false
```
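To check that Prometheus is actually up, you can port-forward to its service. The service name `prometheus-operated` is an assumption based on the operator's default headless service; verify it in your own cluster first.

```shell
# Forward the Prometheus web UI to http://localhost:9090.
# "prometheus-operated" is the headless service the operator creates;
# if yours is named differently, find it with `kubectl get svc`.
kubectl port-forward svc/prometheus-operated 9090
```

Open http://localhost:9090 in a browser while the command is running; Ctrl+C stops the forward.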
To delete Prometheus, uninstall the release first, then remove the CRDs:
```
helm uninstall prometheus
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com
```
##### Grafana install
Simply run `helm install grafana stable/grafana`
You should see something like:
```
NAME: grafana
LAST DEPLOYED: Thu Mar 12 16:49:35 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
grafana.default.svc.cluster.local
Get the Grafana URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana,release=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 3000
3. Login with the password from step 1 and the username: admin
#################################################################################
###### WARNING: Persistence is disabled!!! You will lose your data when #####
###### the Grafana pod is terminated. #####
#################################################################################
```
Follow steps 1-3 in the notes above to access the Grafana admin page.
##### External-dns
External-dns keeps route53 records in sync with your Ingresses and Services, and the NGINX ingress controller routes external traffic into the cluster:
`helm install external-dns bitnami/external-dns`
`helm install nginx-ingress stable/nginx-ingress`
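With default values, the Bitnami chart does not know which DNS provider or domains it should manage. A hedged sketch of a more complete install follows; the values passed to `provider`, `aws.region`, `domainFilters`, `policy`, and `txtOwnerId` are illustrative assumptions, and the IAM permissions external-dns needs for route53 are not covered here.

```shell
# Restrict external-dns to our delegated zone and let it only create/update
# records (the default sync policy would also delete them). txtOwnerId tags
# the records it owns, so several clusters can share one hosted zone.
# All values below are illustrative assumptions.
helm install external-dns bitnami/external-dns \
  --set provider=aws \
  --set aws.region=eu-central-1 \
  --set "domainFilters[0]=development.cluster.reminiz.io" \
  --set policy=upsert-only \
  --set txtOwnerId=development-cluster
```

Check the logs with `kubectl logs deploy/external-dns` to confirm it can list the hosted zone before pointing real traffic at it.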
To be continued ...