# Setup Cloud Platform Steps
[TOC]
## Client-side Installations
```sh=
brew install kubernetes-helm kubectl ansible curl
# On the Rancher VM, install Docker (the Ubuntu/Debian package is docker.io)
apt update
apt install -y docker.io curl
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm ls
helm repo add stable https://kubernetes-charts.storage.googleapis.com
```
## Running Playbook to Install Kubernetes Cluster on On-prem VMs
### Kubespray Prerequisites (run the following steps on ALL nodes)
```sh=
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
# Check with: lsmod | grep br_netfilter. To load the module explicitly:
modprobe br_netfilter
# ensure legacy binaries are installed
sudo apt update
sudo apt-get install -y iptables arptables ebtables
# switch to legacy versions
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
swapoff -a
# Clean up any previous Kubernetes/etcd state (safe to run on fresh nodes)
kubeadm reset -f
systemctl daemon-reload
systemctl restart kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/systemd/system/etcd.service /etc/etcd.env /etc/ssl/etcd /var/lib/etcd /var/lib/cni/ /var/lib/kubelet
systemctl daemon-reload
systemctl restart kubelet
ipvsadm --clear
```
## Troubleshooting etcd
- If you get the following error:
```
Error: client: etcd cluster is unavailable or misconfigured; error #0: remote error: tls: bad certificate
; error #1: remote error: tls: bad certificate
; error #2: dial tcp xx.xx.xx.xx:2379: connect: connection refused
```
#### Resolution: Run on the node that reports the error during the Ansible run
```sh
/usr/local/bin/etcdctl --endpoints=https://xx.xx.xx.xx:2379,https://xx.xx.xx.xx:2379,https://xx.xx.xx.xx:2379 cluster-health | grep -q 'cluster is healthy'
/usr/local/bin/etcdctl --endpoints=https://xx.xx.xx.xx:2379,https://xx.xx.xx.xx:2379,https://xx.xx.xx.xx:2379 --key-file /etc/ssl/etcd/ssl/admin-node1-key.pem --cert-file /etc/ssl/etcd/ssl/admin-node1.pem cluster-health
netstat -anvtpl | grep 2379
docker ps
systemctl status etcd
systemctl enable docker
systemctl enable etcd
systemctl restart docker
docker ps
```
## Running kubespray
```sh=
# Install python/pip using apt/yum/brew/chocolatey.
sudo pip install -r requirements.txt
# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
# Update Ansible inventory file with inventory builder
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# Review and change parameters under ``inventory/mycluster/group_vars``
vim inventory/mycluster/group_vars/all/all.yml
vim inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
# Make sure we are able to SSH to the IPs
# SSH should not require a password; if it does, set up key-based login with: ssh-copy-id <user@hostname>
# Run the command
cd infra/playbook/
ansible-playbook \
-i inventory/mycluster/cluster3.yml \
--become \
--become-user=root cluster.yml
```
## Enable Scheduling on master
- In case we want to enable scheduling on the master, remove its NoSchedule taint (generally not recommended)
- Doing this can increase the load on the master, and the reliability of the entire cluster may suffer
```shell=
kubectl taint \
node mymasternode \
node-role.kubernetes.io/master:NoSchedule-
```
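To disable scheduling on the master again later, the taint can be re-applied. A minimal sketch, using the same placeholder node name as above:
```sh=
# Re-apply the NoSchedule taint on the master node
kubectl taint \
  node mymasternode \
  node-role.kubernetes.io/master=:NoSchedule
```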
## Rancher (Optional -TBD)
```sh=
# Run the Rancher container on a separate VM (not on the Kubernetes VMs)
sudo docker run -d --restart=unless-stopped -p 0.0.0.0:80:80 -p 0.0.0.0:443:443 rancher/rancher
# Create an account in the Rancher UI and then run the following
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user [USER_ACCOUNT]
# On Rancher UI, you will get a command similar to below.
kubectl apply -f https://localhost/v3/import/p5qcw5q547vs6lp5mwvmq4vq2n49zjlpwhz28mkv7wtnhg57whtnq4.yaml
# If the above command fails due to an SSL issue, use the command below
curl --insecure -sfL https://localhost/v3/import/p5qcw5q547vs6lp5mwvmq4vq2n49zjlpwhz28mkv7wtnhg57whtnq4.yaml | kubectl apply -f -
```
## Installing Helm (these steps are only required with Helm 2)
```sh=
# The tiller service account, clusterrolebinding, and helm init below are NOT required with Helm 3; they apply only to Helm 2
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm2 init --service-account tiller
# In Kubernetes Versions> 1.16 Helm/Tiller needs to be updated with below command (use gnu-sed in mac, as \n will not work in mac sed)
# Below brew install and alias command is needed Only in Mac
brew install gnu-sed
alias sed=gsed
helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@ replicas: 1@ replicas: 1\n selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -
# Wait for tiller to be up before proceeding
kubectl wait deploy/tiller-deploy --for=condition=Available --timeout=300s -n kube-system
# Proceed if you get a 0 exit code and below output
# `deployment.apps/tiller-deploy condition met`
# Check whether helm is able to talk to tiller
helm ls
```
## Deploy Ingress (In case of cloud provider, this will consume Only Single LoadBalancer Pricing)
```sh=
kubectl create ns ingress
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm install ingress nginx-stable/nginx-ingress --namespace ingress --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=NodePort
# If the above command did not work, change values.yaml manually instead
helm show values nginx-stable/nginx-ingress > values.yaml
# Edit these 3 values in file
# rbac.create=true
# controller.kind=DaemonSet
# controller.service.type=NodePort
helm install ingress nginx-stable/nginx-ingress --namespace ingress -f ./values.yaml
# To check whether pods are coming up
# kubectl -n ingress get po -w
```
## Deploy Vault
```shell=
cat > helm-vault-raft-values.yml <<EOF
server:
  affinity: ""
  dataStorage:
    enabled: true
    size: 3Gi
  ha:
    enabled: true
    raft:
      enabled: true
ui:
  enabled: true
  serviceType: "NodePort"
EOF
# Add the HashiCorp Helm repo and create the namespace first
helm repo add hashicorp https://helm.releases.hashicorp.com
kubectl create ns vault
helm install vault hashicorp/vault --values helm-vault-raft-values.yml --namespace vault
# Initialize Vault once (on vault-0), then unseal; the unseal step must be repeated on each of the 3 Vault pods
kubectl -n vault exec -it vault-0 -- sh
vault status
vault operator init
vault operator unseal
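# Sketch (assumes the default chart naming: release "vault" and the vault-internal headless service):
# join the remaining raft peers to vault-0 and unseal each of them
kubectl -n vault exec -it vault-1 -- vault operator raft join http://vault-0.vault-internal:8200
kubectl -n vault exec -it vault-1 -- vault operator unseal
kubectl -n vault exec -it vault-2 -- vault operator raft join http://vault-0.vault-internal:8200
kubectl -n vault exec -it vault-2 -- vault operator unseal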
# expose vault UI as ingress
cat > vault-ingress.yml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault-ingress
  namespace: vault
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /ui/$1
spec:
  rules:
  - http:
      paths:
      - path: /ui/(.+)
        pathType: Prefix
        backend:
          service:
            name: vault-ui
            port:
              number: 8200
EOF
cat > vault-ingress-host.yml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault-ingress
  namespace: vault
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: 192.168.0.36
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: vault-ui
            port:
              number: 8200
EOF
kubectl apply -f vault-ingress-host.yml
```
## Cert Manager for managing TLS Certificates (Only if we have Domain Name)
```sh=
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.3/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v0.14.3
# Validate Install
kubectl get pods --namespace cert-manager
cat <<EOF > clusterissuer.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
  namespace: cert-manager
spec:
  acme:
    email: anuj.iitbhu@gmail.com
    privateKeySecretRef:
      name: letsencrypt-production
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
cd infra/manifests/cert-manager
kubectl apply -f clusterissuer.yaml
kubectl -n cert-manager get secret
kubectl -n cert-manager get secret <secret_name>
```
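Once the ClusterIssuer is ready, an Ingress can request a certificate from it via the `cert-manager.io/cluster-issuer` annotation. A minimal sketch; the host reuses the `platform.staging.modularity.in` name from the dashboard section, and the service name, port, and secret name are placeholders:
```sh=
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-tls-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  tls:
  - hosts:
    - platform.staging.modularity.in
    secretName: platform-staging-tls   # cert-manager stores the issued certificate here
  rules:
  - host: platform.staging.modularity.in
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # placeholder backend service
            port:
              number: 80
EOF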
## Heapster
- Heapster has to be installed before installing the dashboard.
```sh=
helm install heapster --namespace kube-system stable/heapster
# To Validate if pod is running
kubectl get po -n kube-system
```
## Kubernetes dashboard
```sh=
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
# Self-signed TLS through Ingress (if we don't have cert-manager)
KEY_FILE=tls-cert.key
CERT_FILE=tls.cert
CERT_NAME=platform-tls   # name of the TLS secret to create (choose any name)
HOST=platform.staging.modularity.in
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}"
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
```
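Logging in to the dashboard requires a bearer token. A minimal sketch that creates an admin service account and prints its token; the `dashboard-admin` name is arbitrary, and this assumes the pre-1.24 behaviour where a token secret is created automatically for the service account:
```sh=
kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin
# Print the login token
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode && echo
```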
## Kubeapps
```sh=
helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create ns kubeapps
kubectl create serviceaccount kubeapps-operator
# To Delete existing Cluster binding
kubectl delete clusterrolebinding kubeapps-operator
# Create new cluster binding
kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator
helm install kubeapps --namespace kubeapps bitnami/kubeapps --set useHelm3=true
# To Validate if pods are running
kubectl get po -n kubeapps
# To Check the logs of the pod
kubectl logs -f -n kubeapps <POD Name>
#Get the Kubeapps URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace kubeapps -l "app=kubeapps" -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward --namespace kubeapps $POD_NAME 8080:8080
```
### To connect kubeapps, use secret
```sh=
kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{range .secrets[*]}{.name}{"\n"}{end}' | grep kubeapps-operator-token) -o jsonpath='{.data.token}' -o go-template='{{.data.token | base64decode}}' && echo
```
## Volumes
### Local Volume Provisioner
```sh=
# Run below commands on all hosts
mkdir /mnt/disks
for vol in vol1 vol2 vol3 vol4 vol5 vol6 vol7 vol8 vol9; do
mkdir /mnt/disks/$vol
mount -t tmpfs $vol /mnt/disks/$vol
done
```
### Deploy Local Volume Provisioner
```sh=
cd infra/manifests
kubectl create ns local-volumes
kubectl apply -f baremetal-default-storage.yaml
```
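A quick way to confirm the provisioner picked up the mounts; this is a sketch, since the pod and StorageClass names depend on what baremetal-default-storage.yaml defines:
```sh=
kubectl -n local-volumes get po
kubectl get sc
# Each discovered mount under /mnt/disks should show up as a PV
kubectl get pv
```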
## ELK
```sh=
cd infra/manifests/log/
kubectl apply -f .
# To validate that the pods are running:
kubectl get po -n kube-system
# To delete PVCs [Persistent Volume Claims]:
kubectl delete pvc <pvc name> -n kube-system
# To delete pods:
kubectl delete po -n kube-system elasticsearch-logging-{0,1,2}
# To change the replica count for ELK
kubectl edit sts -n kube-system elasticsearch-logging
```
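To reach Kibana from a workstation, port-forward its service. A sketch, assuming the manifests create the standard `kibana-logging` Service in kube-system:
```sh=
kubectl -n kube-system port-forward svc/kibana-logging 5601:5601
# Then open http://localhost:5601
```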
# Helm charts
## Jaeger Operator (using Helm 3)
```sh=
# Add the Jaeger Tracing Helm repository
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm repo update
cd infra/manifests/
kubectl create ns monitoring
#Install Jaeger using Helm3
helm upgrade -i jaeger jaegertracing/jaeger -n monitoring
# Below command only needed when using Helm2
##kubectl apply -f tracing/jaeger-crd-simplest.yaml -n monitoring
# To validate that the pods are running:
kubectl get po -n monitoring
```
## Prometheus Operator
```sh=
helm install prom --namespace monitoring stable/prometheus-operator
# To get password of grafana
kubectl -n monitoring get secret prom-grafana -ojsonpath='{.data.admin-password}' | base64 -D
# When re-installing: deleting the release does not delete the CRDs, so remove them manually
kubectl delete crd alertmanagers.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com podmonitors.monitoring.coreos.com
```
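Grafana can be reached by port-forwarding its service; `prom-grafana` is the service created for the `prom` release above and listens on service port 80:
```sh=
kubectl -n monitoring port-forward svc/prom-grafana 3000:80
# Log in at http://localhost:3000 as admin with the password retrieved above
```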
## OpenEBS (In Private Cloud Provider, This will help getting Automatic Volume Provisioning)
- The iSCSI service must be running on every node (a quick check is sketched below the docs link)
- [See Docs](https://docs.openebs.io/docs/next/prerequisites.html)
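A minimal check, assuming Ubuntu nodes (package `open-iscsi`, service `iscsid`); adjust the names for other distros:
```sh=
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid
systemctl status iscsid --no-pager
```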
```sh=
kubectl create ns openebs
helm install openebs --namespace openebs stable/openebs
# Label nodes where we want to run openebs (Optional, by default everywhere)
kubectl label nodes <node-name> node=openebs
# Make the most commonly used storage class the default StorageClass
kubectl patch storageclass openebs-jiva-default -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
`Note: By default, Jiva creates 3 replicas of the PV for each PVC. If the cluster has fewer than 3 nodes, reduce the StorageClass replica count to 2 (see the sketch below).`
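A sketch of a StorageClass with the Jiva replica count reduced to 2; this assumes the legacy Jiva provisioner and its `cas.openebs.io/config` annotation, and the `openebs-jiva-2-replicas` name is arbitrary:
```sh=
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-2-replicas
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "2"
provisioner: openebs.io/provisioner-iscsi
EOF
```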
## Istio
### Create/Input secret for kiali
```sh=
# For bash User
# To Create username
KIALI_USERNAME=$(read -p 'Kiali Username: ' uval && echo -n $uval | base64)
# To Create Password
KIALI_PASSPHRASE=$(read -sp 'Kiali Passphrase: ' pval && echo -n $pval | base64)
# For zsh user
# To Create username
KIALI_USERNAME=$(read '?Kiali Username: ' uval && echo -n $uval | base64)
# example infra-admin
# To Create Password
KIALI_PASSPHRASE=$(read -s "?Kiali Passphrase: " pval && echo -n $pval | base64)
# InfraK*S
```
### Create secret
```sh=
NAMESPACE=istio-system
kubectl create namespace $NAMESPACE
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: $NAMESPACE
  labels:
    app: kiali
type: Opaque
data:
  username: $KIALI_USERNAME
  passphrase: $KIALI_PASSPHRASE
EOF
# Install istioctl (Automatically clones repo)
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.4.4
export PATH=$PWD/bin:$PATH
# To Install ISTIO in Cluster
istioctl manifest apply \
--set profile=demo \
--set values.kiali.enabled=true
# To Expose via ingress
cd infra/manifests/ingress/
kubectl apply -f ingress-istio.yaml
# If a LoadBalancer is not available (local/on-prem cluster), patch the service to type NodePort
# when running from bash
kubectl patch service istio-ingressgateway \
-n istio-system \
-p '{"spec": {"type": "NodePort"}}'
#while running from the zsh
kubectl patch service istio-ingressgateway \
-n istio-system \
-p '{spec: {type: "NodePort"}}'
```
### Verify successful Installation
```sh=
istioctl manifest generate \
--set profile=demo \
--set values.kiali.enabled=true > $HOME/generated-manifest.yaml
istioctl verify-install -f $HOME/generated-manifest.yaml
```
### To get the full secret
```sh
kubectl -n istio-system get secret kiali -oyaml
```
### To get the password of kiali
```sh
kubectl -n istio-system get secret kiali -ojsonpath='{.data.passphrase}' | base64 -d
```
### Bookinfo App for Istio Demo
```sh=
NAMESPACE=istio-demo
kubectl create ns $NAMESPACE
kubectl label namespace $NAMESPACE istio-injection=enabled
# Install application
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -n $NAMESPACE
# check whether its up and running
kubectl -n $NAMESPACE exec -it $(kubectl -n $NAMESPACE get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
# Getting Ingress host and port
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
# Deploy first app to test Gateway
kubectl apply -f samples/httpbin/httpbin.yaml -n $NAMESPACE
# Create Gateway
kubectl -n $NAMESPACE apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
EOF
# Configure Route
kubectl -n $NAMESPACE apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
EOF
# Access the URL
curl -I -HHost:httpbin.example.com http://$INGRESS_HOST:$INGRESS_PORT/status/200
# Access some non-exposed URL (Should get 404)
curl -I -HHost:httpbin.example.com http://$INGRESS_HOST:$INGRESS_PORT/headers
# Gateway
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml -n $NAMESPACE
# Confirm the app is available
curl -s http://${GATEWAY_URL}/productpage | grep -o "<title>.*</title>"
kubectl get destinationrules -o yaml
```
## Open kiali
```sh=
istioctl dashboard kiali
```
## Kubeless
### Install
```sh=
export RELEASE=$(curl -s https://api.github.com/repos/kubeless/kubeless/releases/latest | grep tag_name | cut -d '"' -f 4)
kubectl create ns kubeless
kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml
```
### Validate
```sh=
kubectl get pods -n kubeless
kubectl get deployment -n kubeless
kubectl get customresourcedefinition
```
### Sample Function
---------------
```python=
# Save this function as test.py
def hello(event, context):
    print event
    return event['data']
```
```sh=
kubeless function deploy hello --runtime python2.7 \
  --from-file test.py \
  --handler test.hello
kubectl get functions
kubeless function ls
kubeless function call hello --data 'Hello world!'
kubectl proxy -p 8080 &
curl -L --data '{"Another": "Echo"}' \
  --header "Content-Type:application/json" \
  localhost:8080/api/v1/namespaces/default/services/hello:http-function-port/proxy/
```
### Kubeless with Kafka
```sh=
export OS=$(uname -s| tr '[:upper:]' '[:lower:]')
curl -OL https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless_$OS-amd64.zip && \
unzip kubeless_$OS-amd64.zip && \
sudo mv bundles/kubeless_$OS-amd64/kubeless /usr/local/bin/
export RELEASE=$(curl -s https://api.github.com/repos/kubeless/kafka-trigger/releases/latest | grep tag_name | cut -d '"' -f 4)
kubectl create -f https://github.com/kubeless/kafka-trigger/releases/download/$RELEASE/kafka-zookeeper-$RELEASE.yaml
# Test whether its running or not
kubectl -n kubeless get statefulset
kubectl -n kubeless get svc
# Put the function below into test-kafka.py
cat > test-kafka.py <<EOF
def foobar(event, context):
    print event['data']
    return event['data']
EOF
kubeless function deploy test --runtime python2.7 \
--handler test.foobar \
--from-file test-kafka.py
# Create a trigger for the function
kubeless trigger kafka create test --function-selector created-by=kubeless,function=test --trigger-topic test-topic
kubeless topic create test-topic
kubeless topic publish --topic test-topic --data "Hello World!"
# The function will just print the data on each topic publish
# See https://kubeless.io/docs/use-existing-kafka/ to use an existing Kafka cluster
```
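To confirm the function received the published message, tail its pod logs; the `function=test` label matches the selector used for the trigger above:
```sh=
kubectl logs -f -l function=test
```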
## Kubeless UI
```sh=
#Expose Kubeless UI via ingress
cd infra/manifests/ingress/
kubectl apply -f ingress-kubeless.yaml
kubectl create -f https://raw.githubusercontent.com/kubeless/kubeless-ui/master/k8s.yaml
```
### To Retrieve Token For kubeapps login
```sh=
kubectl get secret \
$(kubectl get serviceaccount kubeapps-operator \
-o jsonpath='{range .secrets[*]}{.name}{"\n"}{end}' \
| grep kubeapps-operator-token) \
-o jsonpath='{.data.token}' \
-o go-template='{{.data.token | base64decode}}' && echo
```
----------------
### Launch kubeapps Dashboard
```sh=
export POD_NAME=$(kubectl get pods -n kubeapps -l "app=kubeapps,release=kubeapps" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 in your browser to access the Kubeapps Dashboard"
kubectl port-forward -n kubeapps $POD_NAME 8080:8080
```
## MySQL
```sh=
kubectl create ns databases
helm repo update
helm install mysql-operator --namespace databases stable/mysql-operator
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysql-agent
  namespace: databases
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: mysql-agent
  namespace: databases
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mysql-agent
subjects:
- kind: ServiceAccount
  name: mysql-agent
  namespace: databases
EOF
cat <<EOF | kubectl create -f -
apiVersion: mysql.oracle.com/v1alpha1
kind: Cluster
metadata:
  name: my-app-db
  namespace: databases
EOF
kubectl -n databases get mysqlclusters
SQLPWD=$(kubectl -n databases get secret mysql-root-user-secret -o jsonpath="{.data.password}" | base64 --decode)
kubectl run mysql-client --image=mysql:5.7 -it --rm --restart=Never \
-- mysql -h mysql.databases -uroot -p$SQLPWD -e 'SELECT 1'
```
## MongoDB (StatefulSet of 1 Node)
```sh=
kubectl create ns database
helm install mongodb --namespace database bitnami/mongodb
#To uninstall mongodb
helm uninstall mongodb --namespace database
# Root password
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace database mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
echo $MONGODB_ROOT_PASSWORD
# To connect to your database run the following command:
kubectl run --namespace database mongodb-client --rm --tty -i --restart='Never' --image bitnami/mongodb --command -- mongo admin --host mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
# To Check the Status of POD_NAME
kubectl get pod -n database
# To Check the Logs of POD_NAME
kubectl logs -f -n database <POD_NAME>
# To delete the pod: kubectl delete pod -n database <POD_NAME>
# To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace database svc/mongodb 27017:27017 &
mongo --host 127.0.0.1
```
## Minio (Equivalent to S3 and GCS, recommended for Private Cloud)
```sh=
brew install minio/stable/mc
kubectl create ns minio
helm install oss ./minio --namespace minio
```
Minio can be accessed via port 9000 on the following DNS name from within your cluster:
oss-minio.minio.svc.cluster.local
To access Minio from localhost, run the below commands:
```sh=
export POD_NAME=$(kubectl get pods --namespace minio -l "release=oss" -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward $POD_NAME 9000 --namespace minio
```
Read more about port forwarding [here](http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/)
You can now access [Minio server](http://localhost:9000). Follow the below steps to connect to Minio server with mc client:
```sh=
# Download the Minio mc client - https://docs.minio.io/docs/minio-client-quickstart-guide
mc config host add oss-minio-local http://localhost:9000 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY S3v4
mc ls oss-minio-local
```
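A quick smoke test with the mc client; the bucket and file names below are just examples:
```sh=
mc mb oss-minio-local/test-bucket
echo "hello" > /tmp/hello.txt
mc cp /tmp/hello.txt oss-minio-local/test-bucket/
mc ls oss-minio-local/test-bucket
```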
[Alternatively, you can use your browser or the Minio SDK to access the server](https://docs.minio.io/categories/17)
## Strimzi Kafka Operator
```sh=
kubectl create ns kafka
curl -L https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.15.0/strimzi-cluster-operator-0.15.0.yaml \
| sed 's/namespace: .*/namespace: kafka/' \
| kubectl apply -f - -n kafka
```
```sh=
kubectl apply -f https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/0.15.0/examples/kafka/kafka-persistent-single.yaml -n kafka
```
- Wait for the Kafka cluster to be ready
```sh=
kubectl wait kafka/my-cluster \
--for=condition=Ready \
--timeout=300s -n kafka
```
- Create test-topic
```sh=
kubectl apply -f https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/master/examples/topic/kafka-topic.yaml -n kafka
```
- Run Console Producer -- Send message
```sh=
kubectl -n kafka run kafka-producer -ti \
--image=strimzi/kafka:0.14.0-kafka-2.3.0 \
--rm=true --restart=Never \
-- bin/kafka-console-producer.sh \
--broker-list my-cluster-kafka-bootstrap:9092 \
--topic my-topic
```
- Run Console Consumer -- Receive Message
```sh=
kubectl -n kafka run kafka-consumer -ti \
--image=strimzi/kafka:0.14.0-kafka-2.3.0 \
--rm=true \
--restart=Never \
-- bin/kafka-console-consumer.sh \
--bootstrap-server my-cluster-kafka-bootstrap:9092 \
--topic my-topic --from-beginning
```
## Install RM/AM & UI
```sh=
# Edit the MongoDB password & host in resource-manager.yaml
cd infra/manifests/platform
vi resource-manager.yaml
cd infra/manifests/
kubectl create ns platform
kubectl label namespace platform istio-injection=enabled
kubectl apply -f platform/
kubectl -n platform get pod
kubectl -n platform logs -f <pod name>
```
## To re-deploy UI Image/latest Image
```sh=
kubectl create ns platform-console
cd infra/manifests/platform
vi ui-console.yaml
# Edit the Image version
kubectl apply -f ui-console.yaml
```
## Install Identity Manager (Keycloak)
- Edit the MongoDB password & host in identity-manager.yaml
```sh
cd infra/manifests/platform
vi identity-manager.yaml
cd infra/manifests/platform/
kubectl create ns platform-identity
kubectl apply -f identity-manager.yaml
kubectl apply -f ingress-identity.yaml
kubectl -n platform get pod
kubectl -n platform logs -f <pod name>
```
## Domain Name (here: staging)
- To replace the domain name in the manifest files, run the command below with your own domain name (it replaces `internal` with `staging`).
```sh=
sed -i 's/internal/staging/g' *
```
## Delete the Entire Cluster
```sh=
ansible-playbook -i inventory/mycluster/cluster3.yml --become --become-user=root reset.yml
```
# Extra Tricks and Learnings
## To Delete All Pods from a Node (a cordon/drain alternative is sketched below)
```sh=
kubectl get pods --all-namespaces \
-o jsonpath='{range .items[?(.spec.nodeName=="gke-cluster-1-b4c97d4d-node-psh2")]}{@.metadata.namespace} {.metadata.name} {end}' |\
xargs -n 2 | xargs -I % sh -c "kubectl delete pods --namespace=%"
pod "busybox-694uu" deleted
pod "fluentd-cloud-logging-gke-cluster-1-b4c97d4d-node-psh2" deleted
```
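The built-in way to achieve the same result is to cordon and drain the node; a sketch using the node name from the example above:
```sh=
kubectl cordon gke-cluster-1-b4c97d4d-node-psh2
kubectl drain gke-cluster-1-b4c97d4d-node-psh2 --ignore-daemonsets
```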
## Simplest Kubernetes Custom Scheduler
```sh=
#!/bin/bash
# Assumes `kubectl proxy --port=8001` is running locally
SERVER='localhost:8001'
while true;
do
for PODNAME in $(kubectl --server $SERVER get pods -o json | jq '.items[] | select(.spec.schedulerName == "my-scheduler") | select(.spec.nodeName == null) | .metadata.name' | tr -d '"');
do
NODES=($(kubectl --server $SERVER get nodes -o json | jq '.items[].metadata.name' | tr -d '"'))
NUMNODES=${#NODES[@]}
CHOSEN=${NODES[$[$RANDOM % $NUMNODES]]}
curl --header "Content-Type:application/json" --request POST --data '{"apiVersion":"v1", "kind": "Binding", "metadata": {"name": "'$PODNAME'"}, "target": {"apiVersion": "v1", "kind" : "Node", "name": "'$CHOSEN'"}}' http://$SERVER/api/v1/namespaces/default/pods/$PODNAME/binding/
echo "Assigned $PODNAME to $CHOSEN"
done
sleep 1
done
```
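To exercise the scheduler, create a pod that requests it via `spec.schedulerName`; a sketch, where the pod stays Pending until the script above binds it to a node:
```sh=
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: scheduler-test
spec:
  schedulerName: my-scheduler
  containers:
  - name: nginx
    image: nginx
EOF
```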
## Create Metrics Using a File-based Exporter
```sh=
touch /tmp/metrics-temp
# Every 5 minutes, write a node_directory_size_bytes metric for each directory under /mnt larger than 100M
while true; do
  du --bytes --separate-dirs --threshold=100M /mnt | while read -r size path; do
    echo "node_directory_size_bytes{path=\"$path\"} $size" >> /tmp/metrics-temp
  done
  mv /tmp/metrics-temp /tmp/metrics
  sleep 300
done
```
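For Prometheus to scrape these values, the node_exporter on the host must have its textfile collector pointed at the directory containing the metrics file; a sketch of the relevant flag (directory chosen to match the script above):
```sh=
node_exporter --collector.textfile.directory=/tmp
```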
## Job which seeds Grafana Dashboard
```sh=
for file in *-datasource.json ; do
if [ -e "$file" ] ; then
echo "importing $file" &&
curl --silent --fail --show-error \
--request POST http://admin:admin@grafana:3000/api/datasources \
--header "Content-Type: application/json" \
--data-binary "@$file" ;
echo "" ;
fi
done ;
for file in *-dashboard.json ; do
if [ -e "$file" ] ; then
echo "importing $file" &&
( echo '{"dashboard":'; \
cat "$file"; \
echo ',"overwrite":true,"inputs":[{"name":"DS_PROMETHEUS","type":"datasource","pluginId":"prometheus","value":"prometheus"}]}' ) \
| jq -c '.' \
| curl --silent --fail --show-error \
--request POST http://admin:admin@grafana:3000/api/dashboards/import \
--header "Content-Type: application/json" \
--data-binary "@-" ;
echo "" ;
fi
done
```
## Krew Plugins used
- [ ] access-matrix
- [x] auth-proxy
- [x] bulk-action
- [x] ca-cert
- [ ] change-ns
- [x] config-cleanup
- [x] config-registry
- [x] cssh
- [ ] ctx
- [x] custom-cols
- [x] debug
- [ ] debug-shell
- [x] deprecations
- [x] detached/krew
- [ ] df-pv
- [ ] doctor
- [x] duck
- [x] edit-status
- [x] eksporter
- [x] emit-event
- [ ] evict-pod
- [x] example
- [ ] exec-as
- [x] exec-cronjob
- [x] fields
- [x] flame
- [x] fleet
- [x] fuzzy
- [x] gadget
- [ ] get-all
- [x] graph
- [x] grep
- [x] gs
- [ ] hns
- [x] images
- [ ] ingress-nginx
- [x] ipick
- [ ] konfig
- [x] kudo
- [x] match-name
- [x] modify-secret
- [x] mtail
- [x] neat
- [x] net-forward
- [x] node-admin
- [ ] node-restart
- [ ] node-shell
- [x] ns
- [x] open-svc
- [x] operator
- [x] pod-dive
- [x] pod-logs
- [x] pod-shell
- [x] podevents
- [x] popeye
- [x] profefe
- [x] prompt
- [ ] prune-unused
- [x] rbac-lookup
- [x] rbac-view
- [x] reap
- [ ] resource-capacity
- [ ] resource-snapshot (Does not work anymore)
- [x] restart
- [x] rm-standalone-pod
- [x] rolesum
- [x] roll
- [x] service-tree
- [x] sick-pods
- [x] snap
- [x] sniff
- [x] sort-manifests
- [x] split-yaml
- [x] spy
- [x] ssh-jump
- [ ] sshd
- [x] starboard
- [x] status
- [x] sudo
- [x] tail
- [x] tap
- [x] tmux-exec
- [x] topology
- [x] trace
- [x] tree
- [x] unused-volumes
- [ ] view-allocations
- [x] view-cert
- [ ] view-secret
- [ ] view-utilization
- [x] warp
- [ ] who-can
- [x] whoami