# Prometheus + Grafana Installation
- with PV and PVC deployment
###### tags: `ITRI`, `K8s`, `Prometheus`, `Grafana`, `install`
***
## Reference
- Github ([prometheus/prometheus](https://github.com/prometheus/prometheus))
***
## Introduction
Prometheus is an open-source systems monitoring and alerting framework built around a TSDB (Time Series Database).
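As a quick illustration of what the server consumes, here is a minimal `prometheus.yml` scrape configuration (a sketch only; the job name and target below are placeholders, not part of this installation):
```yaml
# Minimal scrape configuration (illustrative sketch; the target is a placeholder)
global:
  scrape_interval: 15s          # how often Prometheus pulls metrics

scrape_configs:
  - job_name: "node"            # arbitrary job label
    static_configs:
      - targets: ["localhost:9100"]   # e.g. a node_exporter endpoint
```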
***
## Architecture

***
## Prometheus Installation
### Install Prometheus with PVs and PVCs
Clone the Helm charts repository from GitHub.
```bash
$ git clone https://github.com/helm/charts.git
```
Prometheus-related charts in `Helm`:
```
# helm search prometheus
NAME CHART VERSION APP VERSION DESCRIPTION
coreos/kube-prometheus 0.0.105 Manifests, dashboards, and alerting rules for end-to-end ...
coreos/prometheus 0.0.51 Prometheus instance created by the CoreOS Prometheus Oper...
coreos/prometheus-operator 0.0.29 0.20.0 Provides easy monitoring definitions for Kubernetes servi...
stable/prometheus 8.14.0 2.11.1 Prometheus is a monitoring system and time series database.
stable/prometheus-adapter 1.2.0 v0.5.0 A Helm chart for k8s prometheus adapter
stable/prometheus-blackbox-exporter 0.4.0 0.14.0 Prometheus Blackbox Exporter
stable/prometheus-cloudwatch-exporter 0.4.7 0.5.0 A Helm chart for prometheus cloudwatch-exporter
stable/prometheus-consul-exporter 0.1.4 0.4.0 A Helm chart for the Prometheus Consul Exporter
stable/prometheus-couchdb-exporter 0.1.1 1.0 A Helm chart to export the metrics from couchdb in Promet...
stable/prometheus-mongodb-exporter 2.1.0 v0.7.0 A Prometheus exporter for MongoDB metrics
stable/prometheus-mysql-exporter 0.5.0 v0.11.0 A Helm chart for prometheus mysql exporter with cloudsqlp...
stable/prometheus-nats-exporter 2.1.0 0.4.0 A Helm chart for prometheus-nats-exporter
stable/prometheus-node-exporter 1.5.1 0.18.0 A Helm chart for prometheus node-exporter
stable/prometheus-operator 5.14.1 0.31.1 Provides easy monitoring definitions for Kubernetes servi...
stable/prometheus-postgres-exporter 0.6.3 0.4.7 A Helm chart for prometheus postgres-exporter
stable/prometheus-pushgateway 0.4.1 0.8.0 A Helm chart for prometheus pushgateway
stable/prometheus-rabbitmq-exporter 0.5.1 v0.29.0 Rabbitmq metrics exporter for prometheus
stable/prometheus-redis-exporter 2.0.2 0.28.0 Prometheus exporter for Redis metrics
stable/prometheus-snmp-exporter 0.0.4 0.14.0 Prometheus SNMP Exporter
stable/prometheus-to-sd 0.1.1 0.2.2 Scrape metrics stored in prometheus format and push them ...
coreos/alertmanager 0.1.7 0.15.1 Alertmanager instance created by the CoreOS Prometheus Op...
coreos/grafana 0.0.37 Grafana instance for kube-prometheus
stable/elasticsearch-exporter 1.5.0 1.0.2 Elasticsearch stats exporter for Prometheus
stable/helm-exporter 0.3.0 0.4.0 Exports helm release stats to prometheus
stable/karma 1.1.16 v0.38 A Helm chart for Karma - an UI for Prometheus Alertmanager
stable/stackdriver-exporter 1.1.0 0.6.0 Stackdriver exporter for Prometheus
stable/weave-cloud 0.3.3 1.3.0 Weave Cloud is a add-on to Kubernetes which provides Cont...
coreos/exporter-node 0.4.6 0.14.0 A Helm chart for Kubernetes node exporter
stable/kube-state-metrics 1.6.5 1.6.0 Install kube-state-metrics to generate and expose cluster...
stable/kuberhealthy 1.2.5 v1.0.2 The official Helm chart for Kuberhealthy.
stable/mariadb 6.5.4 10.3.16 Fast, reliable, scalable, and easy to use open-source rel...
```
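As an alternative to cloning the whole charts repository, the chart's default values can be dumped directly with Helm 2's `inspect` command (a sketch; the output filename is arbitrary):
```bash
# Dump the default values.yaml of the stable/prometheus chart (Helm 2 syntax)
helm inspect values stable/prometheus > prometheus-values.yaml
```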
Find the `/home/itri/charts/stable/prometheus/values.yaml` file.
```
# cd /home/itri/charts/stable/prometheus
# vim values.yaml
```
Modify the `/home/itri/charts/stable/prometheus/values.yaml` file with the `persistentVolume` settings:
1. enable it for `alertmanager`
2. enable it for `pushgateway`
3. enable it for `server`

Make `alertmanager.persistentVolume.enabled=true`.
```=
persistentVolume:
  ## If true, alertmanager will create/use a Persistent Volume Claim
  ## If false, use emptyDir
  ##
  enabled: true
  #enabled: false
  ## alertmanager data Persistent Volume access modes
  ## Must match those of existing PV or dynamic provisioner
  ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  accessModes:
    - ReadWriteMany
  ## alertmanager data Persistent Volume Claim annotations
  ##
  annotations: {}
  ## alertmanager data Persistent Volume existing claim name
  ## Requires alertmanager.persistentVolume.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  existingClaim: "alertmanager-pvc"
  ## alertmanager data Persistent Volume mount root path
  ##
  mountPath: /data/prometheus/alertmanager
  ## alertmanager data Persistent Volume size
  ##
  size: 2Gi
  ## alertmanager data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  storageClass: "alertmanager"
  ## Subdirectory of alertmanager data Persistent Volume to mount
  ## Useful if the volume's root directory is not empty
  ##
  subPath: ""
```
Make `pushgateway.persistentVolume.enabled=true`.
```=
persistentVolume:
  ## If true, pushgateway will create/use a Persistent Volume Claim
  ## If false, use emptyDir
  ##
  #enabled: false
  enabled: true
  ## pushgateway data Persistent Volume access modes
  ## Must match those of existing PV or dynamic provisioner
  ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  accessModes:
    - ReadWriteMany
  ## pushgateway data Persistent Volume Claim annotations
  ##
  annotations: {}
  ## pushgateway data Persistent Volume existing claim name
  ## Requires pushgateway.persistentVolume.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  #existingClaim: ""
  existingClaim: "pushgateway-pvc"
  ## pushgateway data Persistent Volume mount root path
  ##
  mountPath: /data/prometheus/pushgateway
  ## pushgateway data Persistent Volume size
  ##
  size: 2Gi
  ## pushgateway data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  storageClass: "pushgateway"
  ## Subdirectory of pushgateway data Persistent Volume to mount
  ## Useful if the volume's root directory is not empty
  ##
  subPath: ""
```
Make `server.persistentVolume.enabled=true`.
```=
persistentVolume:
  ## If true, Prometheus server will create/use a Persistent Volume Claim
  ## If false, use emptyDir
  ##
  enabled: true
  #enabled: false
  ## Prometheus server data Persistent Volume access modes
  ## Must match those of existing PV or dynamic provisioner
  ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  accessModes:
    - ReadWriteMany
  ## Prometheus server data Persistent Volume annotations
  ##
  annotations: {}
  ## Prometheus server data Persistent Volume existing claim name
  ## Requires server.persistentVolume.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  existingClaim: "server-pvc"
  ## Prometheus server data Persistent Volume mount root path
  ##
  mountPath: /data/prometheus/server
  ## Prometheus server data Persistent Volume size
  ##
  size: 8Gi
  ## Prometheus server data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  storageClass: "server"
  ## Subdirectory of Prometheus server data Persistent Volume to mount
  ## Useful if the volume's root directory is not empty
  ##
  subPath: ""
```
After reviewing the settings provided in the `charts` repository, the PVs and PVCs have to be created manually.

First, create the host directories that `alertmanager`, `pushgateway`, and `server` will mount.
```
# mkdir -p /data/prometheus/alertmanager
# mkdir -p /data/prometheus/pushgateway
# mkdir -p /data/prometheus/server
```
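Because these are `hostPath` volumes, the directories also need to be writable by the container users. A rough sketch, assuming the common UID 65534 (`nobody`) used by the Prometheus images; verify for your image versions and adjust to your own security policy:
```bash
# hostPath volumes bypass dynamic provisioning, so prepare permissions manually
chown -R 65534:65534 /data/prometheus/server /data/prometheus/alertmanager
chmod -R 775 /data/prometheus
```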
Create the alertmanager PV file: `# vim alertmanager-pv.yaml`.
```=
kind: PersistentVolume
apiVersion: v1
metadata:
  name: alertmanager-pv
  labels:
    type: local
spec:
  storageClassName: alertmanager
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/prometheus/alertmanager"
```
Create the pushgateway PV file: `# vim pushgateway-pv.yaml`.
```=
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pushgateway-pv
  labels:
    type: local
spec:
  storageClassName: pushgateway
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/prometheus/pushgateway"
```
Create the server PV file: `# vim server-pv.yaml`.
```=
kind: PersistentVolume
apiVersion: v1
metadata:
  name: server-pv
  labels:
    type: local
spec:
  storageClassName: server
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/prometheus/server"
```
Create the alertmanager PVC file: `# vim alertmanager-pvc.yaml`.
```=
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: alertmanager-pvc
spec:
  storageClassName: alertmanager
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```
Create the pushgateway PVC file: `# vim pushgateway-pvc.yaml`.
```=
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pushgateway-pvc
spec:
  storageClassName: pushgateway
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```
Create the server PVC file: `# vim server-pvc.yaml`.
```=
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: server-pvc
spec:
  storageClassName: server
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
```
Create the PVs and PVCs.
```
// Create PV
# kubectl apply -f pushgateway-pv.yaml
persistentvolume/pushgateway-pv created
# kubectl apply -f server-pv.yaml
persistentvolume/server-pv created
# kubectl apply -f alertmanager-pv.yaml
persistentvolume/alertmanager-pv created
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
alertmanager-pv 2Gi RWX Retain Available alertmanager 7s
pushgateway-pv 2Gi RWX Retain Available pushgateway 23s
server-pv 8Gi RWX Retain Available server 15s
// Create PVC
# kubectl apply -f pushgateway-pvc.yaml
persistentvolumeclaim/pushgateway-pvc created
# kubectl apply -f server-pvc.yaml
persistentvolumeclaim/server-pvc created
# kubectl apply -f alertmanager-pvc.yaml
persistentvolumeclaim/alertmanager-pvc created
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
alertmanager-pvc Bound alertmanager-pv 2Gi RWX alertmanager 8s
pushgateway-pvc Bound pushgateway-pv 2Gi RWX pushgateway 27s
server-pvc Bound server-pv 8Gi RWX server 18s
```
### Prometheus Installation using `helm`

```
# helm install stable/prometheus --name ohmygod
NAME: ohmygod
LAST DEPLOYED: Wed Jul 10 10:22:35 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME AGE
ohmygod-prometheus-alertmanager 0s
ohmygod-prometheus-server 0s
==> v1/ServiceAccount
ohmygod-prometheus-alertmanager 0s
ohmygod-prometheus-kube-state-metrics 0s
ohmygod-prometheus-node-exporter 0s
ohmygod-prometheus-pushgateway 0s
ohmygod-prometheus-server 0s
==> v1beta1/ClusterRole
ohmygod-prometheus-kube-state-metrics 0s
ohmygod-prometheus-server 0s
==> v1beta1/ClusterRoleBinding
ohmygod-prometheus-kube-state-metrics 0s
ohmygod-prometheus-server 0s
==> v1/Service
ohmygod-prometheus-alertmanager 0s
ohmygod-prometheus-kube-state-metrics 0s
ohmygod-prometheus-node-exporter 0s
ohmygod-prometheus-pushgateway 0s
ohmygod-prometheus-server 0s
==> v1beta1/DaemonSet
ohmygod-prometheus-node-exporter 0s
==> v1beta1/Deployment
ohmygod-prometheus-alertmanager 0s
ohmygod-prometheus-kube-state-metrics 0s
ohmygod-prometheus-pushgateway 0s
ohmygod-prometheus-server 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
ohmygod-prometheus-node-exporter-djvt2 0/1 ContainerCreating 0 0s
ohmygod-prometheus-node-exporter-wk5sb 0/1 ContainerCreating 0 0s
ohmygod-prometheus-alertmanager-c6bcd684f-2d22t 0/2 Pending 0 0s
ohmygod-prometheus-kube-state-metrics-5b9d56c94-kj5m8 0/1 ContainerCreating 0 0s
ohmygod-prometheus-pushgateway-f96878884-snzlm 0/1 ContainerCreating 0 0s
ohmygod-prometheus-server-5c49bdb488-c4wsn 0/2 Pending 0 0s
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
ohmygod-prometheus-server.default.svc.cluster.local
Get the Prometheus server URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
ohmygod-prometheus-alertmanager.default.svc.cluster.local
Get the Alertmanager URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9093
The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
ohmygod-prometheus-pushgateway.default.svc.cluster.local
Get the PushGateway URL by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9091
For more information on running Prometheus, visit:
https://prometheus.io/
```
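Note that `helm install stable/prometheus` fetches the chart from the configured chart repository, so it may not pick up the locally edited `values.yaml`. To be sure the edits take effect, install from the local clone or pass the file explicitly (a sketch, assuming the clone lives at `/home/itri/charts`):
```bash
# Install from the locally edited chart directory (Helm 2)
helm install /home/itri/charts/stable/prometheus --name ohmygod

# ...or install from the repo but override with the edited values file
helm install stable/prometheus --name ohmygod -f /home/itri/charts/stable/prometheus/values.yaml
```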
The Prometheus pods are now running successfully.
```
# kubectl get pod
NAME READY STATUS RESTARTS AGE
cmk-cluster-init-pod 0/1 Completed 0 5d20h
cmk-init-install-discover-pod-node1 0/2 Completed 0 5d20h
cmk-init-install-discover-pod-node2 0/2 Completed 0 5d20h
cmk-reconcile-nodereport-ds-node1-9z8tv 2/2 Running 0 5d20h
cmk-reconcile-nodereport-ds-node2-8x65s 2/2 Running 0 5d20h
cmk-webhook-deployment-57d9594bbb-bgkzs 1/1 Running 0 5d20h
node-feature-discovery-d9sc5 0/1 Completed 0 5d20h
node-feature-discovery-pfwcw 0/1 Completed 0 5d20h
ohmygod-prometheus-alertmanager-c6bcd684f-2d22t 2/2 Running 0 89s
ohmygod-prometheus-kube-state-metrics-5b9d56c94-kj5m8 1/1 Running 0 89s
ohmygod-prometheus-node-exporter-djvt2 1/1 Running 0 89s
ohmygod-prometheus-node-exporter-wk5sb 1/1 Running 0 89s
ohmygod-prometheus-pushgateway-f96878884-snzlm 1/1 Running 0 89s
ohmygod-prometheus-server-5c49bdb488-c4wsn 2/2 Running 0 89s
testpod1 1/1 Running 0 2d
```
- To list only the Prometheus pods: `# kubectl get pods --selector=app=prometheus`.
Check the Prometheus services.
```
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cmk-webhook-service ClusterIP 10.233.49.205 <none> 443/TCP 6d2h
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 6d2h
ohmygod-prometheus-alertmanager ClusterIP 10.233.52.194 <none> 80/TCP 5h42m
ohmygod-prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 5h42m
ohmygod-prometheus-node-exporter ClusterIP None <none> 9100/TCP 5h42m
ohmygod-prometheus-pushgateway ClusterIP 10.233.55.66 <none> 9091/TCP 5h42m
ohmygod-prometheus-server ClusterIP 10.233.50.171 <none> 80/TCP 5h42m
```
Add the environment variables. `# vim ~/.bashrc`
```
export server_pod_name=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
export alertmanager_pod_name=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
```
Remember to source the environment variables.
```
# . ~/.bashrc
```
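With the variables sourced, the port-forward commands from the chart notes can be shortened; a small usage sketch:
```bash
# Forward the Prometheus server UI (uses $server_pod_name exported above)
kubectl --namespace default port-forward $server_pod_name 9090

# Forward the Alertmanager UI
kubectl --namespace default port-forward $alertmanager_pod_name 9093
```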
Edit the Kubernetes Service object for the Prometheus server.
```
# kubectl edit svc ohmygod-prometheus-server
```
See the content.
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-07-10T02:22:36Z
  labels:
    app: prometheus
    chart: prometheus-8.14.0
    component: server
    heritage: Tiller
    release: ohmygod
  name: ohmygod-prometheus-server
  namespace: default
  resourceVersion: "1442186"
  selfLink: /api/v1/namespaces/default/services/ohmygod-prometheus-server
  uid: 95035509-a2b9-11e9-ad65-0cc47aeb8c54
spec:
  clusterIP: 10.233.50.171
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    app: prometheus
    component: server
    release: ohmygod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```
Note the `targetPort: 9090`.
Use `kubectl` to do the `port-forward`:
```
# kubectl --namespace default port-forward ohmygod-prometheus-server-5c49bdb488-c4wsn 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
```
Forward 127.0.0.1:9090 to port 8888 on the external IP via an ssh tunnel.
- By default, `port-forward` binds only to 127.0.0.1, so traffic on this address is not reachable from the Internet.
```
# sudo ssh -N -L 0.0.0.0:8888:localhost:9090 root@100.86.1.141
```
Now browse to `http://100.86.1.141:8888` to see the Prometheus web page.
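An alternative to the ssh tunnel is to let `kubectl` bind the forward to all interfaces directly; the `--address` flag is available in newer kubectl versions (a sketch):
```bash
# Bind the port-forward to 0.0.0.0 so it is reachable from outside the host
kubectl --namespace default port-forward --address 0.0.0.0 $server_pod_name 8888:9090
# then browse to http://100.86.1.141:8888
```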

### Install Prometheus without PVs and PVCs
Skipped.
***
## Prometheus Uninstallation
***
```bash
# helm del --purge ohmygod     # delete the whole ohmygod release
release "ohmygod" deleted
```
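`helm del --purge` only removes the resources created by the release; the manually created PVs and PVCs (and the data under the `hostPath` directories) stay behind and have to be cleaned up separately if no longer needed. A sketch:
```bash
# Remove the manually created claims and volumes
# (the data under /data/prometheus remains on the node)
kubectl delete -f alertmanager-pvc.yaml -f pushgateway-pvc.yaml -f server-pvc.yaml
kubectl delete -f alertmanager-pv.yaml -f pushgateway-pv.yaml -f server-pv.yaml
```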
***
## Grafana Installation with PV and PVC.
***
### Installation using `helm`
Grafana charts in `Helm`:
```
# helm search grafana
NAME CHART VERSION APP VERSION DESCRIPTION
coreos/grafana 0.0.37 Grafana instance for kube-prometheus
stable/grafana 3.5.7 6.2.4 The leading tool for querying and visualizing time series...
```
## Record of Failed Attempts
- [Grafana Installation Problems Recording](https://hackmd.io/sF6GXsqlS4G4sBZnFJ8m7Q?view)
## Configuration in the *Helm Chart* File
Edit the key file: `# vim /home/itri/charts/stable/grafana/values.yaml`.
Set the service type to `NodePort` and specify `nodePort=30300`.
```bash=
service:
  type: NodePort
  port: 80
  targetPort: 3000
  nodePort: 30300
    # targetPort: 4181 To be used with a proxy extraContainer
  annotations: {}
  labels: {}
```
Add the PV and PVC settings.
```bash=
persistence:
  enabled: true
  storageClassName: grafana
  accessModes:
    - ReadWriteMany
  size: 10Gi
  # annotations: {}
  # subPath: ""
  existingClaim: "grafana-pvc"
```
### PV and PVC preparation before installation
From `/home/itri/charts/stable/grafana/values.yaml`, we know Grafana requests a 10Gi volume by default.
First, create the PV file: `# vim grafana-pv.yaml`.
```bash=
kind: PersistentVolume
apiVersion: v1
metadata:
  name: grafana-pv
  labels:
    type: local
spec:
  storageClassName: grafana
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/prometheus/grafana"
```
Then create the PVC file: `# vim grafana-pvc.yaml`.
```bash=
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana-pvc
spec:
  storageClassName: grafana
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```
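Since this PV also uses `hostPath`, make sure the directory exists on the node before applying. A sketch; the official Grafana image runs as UID 472, so ownership may need adjusting for your image version:
```bash
# Prepare the hostPath directory for the Grafana PV
mkdir -p /data/prometheus/grafana
chown -R 472:472 /data/prometheus/grafana   # 472 = grafana user in the official image
```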
`apply` the PV and PVC.
```bash
# kubectl apply -f grafana-pv.yaml
# kubectl apply -f grafana-pvc.yaml
```
Check the binding.
```bash
# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/alertmanager-pv 2Gi RWX Retain Bound default/alertmanager-pvc alertmanager 6d2h
persistentvolume/grafana-pv 10Gi RWX Retain Bound default/grafana-pvc grafana 20h
persistentvolume/pushgateway-pv 2Gi RWX Retain Bound default/pushgateway-pvc pushgateway 6d2h
persistentvolume/server-pv 8Gi RWX Retain Bound default/server-pvc server 6d2h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/alertmanager-pvc Bound alertmanager-pv 2Gi RWX alertmanager 6d2h
persistentvolumeclaim/grafana-pvc Bound grafana-pv 10Gi RWX grafana 20h
persistentvolumeclaim/pushgateway-pvc Bound pushgateway-pv 2Gi RWX pushgateway 6d2h
persistentvolumeclaim/server-pvc Bound server-pv 8Gi RWX server 6d2h
```
### Install Grafana
Using `helm install`.
```bash
# helm install --name grafana stable/grafana
NAME: grafana
LAST DEPLOYED: Tue Jul 16 12:17:44 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME AGE
grafana 0s
==> v1beta1/PodSecurityPolicy
grafana 0s
grafana-test 0s
==> v1/Secret
grafana 0s
==> v1/ServiceAccount
grafana 0s
grafana-test 0s
==> v1/ClusterRoleBinding
grafana-clusterrolebinding 0s
==> v1beta1/Role
grafana 0s
==> v1beta1/RoleBinding
grafana 0s
==> v1/RoleBinding
grafana-test 0s
==> v1beta2/Deployment
grafana 0s
==> v1/ConfigMap
grafana 0s
grafana-test 0s
==> v1/ClusterRole
grafana-clusterrole 0s
==> v1/Role
grafana-test 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
grafana-5f774787b6-bglch 0/1 Init:0/1 0 0s
NOTES:
1. Get your 'admin' user password by running:
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
grafana.default.svc.cluster.local
Get the Grafana URL to visit by running these commands in the same shell:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services grafana)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
3. Login with the password from step 1 and the username: admin
```
The Grafana pod is running.
```bash
# kubectl get pod
NAME READY STATUS RESTARTS AGE
alpine 1/1 Running 0 5d1h
cmk-cluster-init-pod 0/1 Completed 0 11d
cmk-init-install-discover-pod-node1 0/2 Completed 0 11d
cmk-init-install-discover-pod-node2 0/2 Completed 0 11d
cmk-reconcile-nodereport-ds-node1-9z8tv 2/2 Running 0 11d
cmk-reconcile-nodereport-ds-node2-8x65s 2/2 Running 0 11d
cmk-webhook-deployment-57d9594bbb-bgkzs 1/1 Running 0 11d
grafana-5f774787b6-bglch 1/1 Running 0 2m12s
node-feature-discovery-d9sc5 0/1 Completed 0 11d
node-feature-discovery-pfwcw 0/1 Completed 0 11d
ohmygod-prometheus-alertmanager-c6bcd684f-2d22t 2/2 Running 0 6d1h
ohmygod-prometheus-kube-state-metrics-5b9d56c94-kj5m8 1/1 Running 0 6d1h
ohmygod-prometheus-node-exporter-djvt2 1/1 Running 0 6d1h
ohmygod-prometheus-node-exporter-wk5sb 1/1 Running 0 6d1h
ohmygod-prometheus-pushgateway-f96878884-snzlm 1/1 Running 0 6d1h
ohmygod-prometheus-server-5c49bdb488-c4wsn 2/2 Running 0 6d1h
testpod1 1/1 Running 0 8d
```
The Grafana service is running.
```bash
# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cmk-webhook-service ClusterIP 10.233.49.205 <none> 443/TCP 11d
grafana NodePort 10.233.39.113 <none> 80:30300/TCP 2m51s
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 11d
ohmygod-prometheus-alertmanager ClusterIP 10.233.52.194 <none> 80/TCP 6d1h
ohmygod-prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 6d1h
ohmygod-prometheus-node-exporter ClusterIP None <none> 9100/TCP 6d1h
ohmygod-prometheus-pushgateway ClusterIP 10.233.55.66 <none> 9091/TCP 6d1h
ohmygod-prometheus-server NodePort 10.233.50.171 <none> 80:32350/TCP 6d1h
```
We can see that port 30300 is listening.
```bash
# netstat -tupln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 31110/systemd-resol
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 35783/sshd
tcp 0 0 127.0.0.1:35557 0.0.0.0:* LISTEN 1368/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1368/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 4447/kube-proxy
tcp 0 0 100.86.1.141:10250 0.0.0.0:* LISTEN 1368/kubelet
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 73663/kube-schedule
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 70258/etcd
tcp 0 0 100.86.1.141:2379 0.0.0.0:* LISTEN 70258/etcd
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 385/kube-controller
tcp 0 0 100.86.1.141:2380 0.0.0.0:* LISTEN 70258/etcd
tcp6 0 0 :::10256 :::* LISTEN 4447/kube-proxy
tcp6 0 0 :::10257 :::* LISTEN 385/kube-controller
tcp6 0 0 :::22 :::* LISTEN 35783/sshd
tcp6 0 0 :::30300 :::* LISTEN 4447/kube-proxy
tcp6 0 0 :::32350 :::* LISTEN 4447/kube-proxy
tcp6 0 0 :::6443 :::* LISTEN 73701/kube-apiserve
tcp6 0 0 :::9100 :::* LISTEN 65480/node_exporter
udp 0 0 0.0.0.0:8472 0.0.0.0:* -
udp 0 0 127.0.0.53:53 0.0.0.0:* 31110/systemd-resol
```
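A quick reachability check from the host (assuming the node IP used earlier):
```bash
# Expect an HTTP response (Grafana redirects to /login) from the NodePort
curl -I http://100.86.1.141:30300
```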
### Log in to Grafana
Login Page.

Find the Grafana admin password using your `<grafana_release_name>`; here `x-grafana` is used.
```bash
# kubectl get secret --namespace default x-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
AmeQNzQmYHcaojEETuiOIQXRlROLPCpbksMSfe84
```
- Use user/pwd = admin/AmeQNzQmYHcaojEETuiOIQXRlROLPCpbksMSfe84
You should now see the Grafana page.

### Change admin password
- http://100.86.1.141:30300/profile/password
- http://100.86.1.141:30300/admin/users/edit/1
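If the web UI is not convenient, the admin password can also be reset from inside the pod with `grafana-cli` (a sketch; the pod name is the one shown by `kubectl get pod`, and the new password is a placeholder):
```bash
# Reset the Grafana admin password inside the running container
kubectl exec -it grafana-5f774787b6-bglch -- grafana-cli admin reset-admin-password NewStrongPassword
```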
***
## Check
***
## Note
### Show containers in a pod
``` bash
# kubectl get po -o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
NAME CONTAINERS
alpine alpine
cmk-cluster-init-pod cmk-cluster-init-pod
cmk-init-install-discover-pod-node1 install,discover
cmk-init-install-discover-pod-node2 install,discover
cmk-reconcile-nodereport-ds-node1-9z8tv reconcile,nodereport
cmk-reconcile-nodereport-ds-node2-8x65s reconcile,nodereport
cmk-webhook-deployment-57d9594bbb-bgkzs cmk-webhook
node-feature-discovery-d9sc5 node-feature-discovery
node-feature-discovery-pfwcw node-feature-discovery
ohmygod-prometheus-alertmanager-c6bcd684f-2d22t prometheus-alertmanager,prometheus-alertmanager-configmap-reload
ohmygod-prometheus-kube-state-metrics-5b9d56c94-kj5m8 prometheus-kube-state-metrics
ohmygod-prometheus-node-exporter-djvt2 prometheus-node-exporter
ohmygod-prometheus-node-exporter-wk5sb prometheus-node-exporter
ohmygod-prometheus-pushgateway-f96878884-snzlm prometheus-pushgateway
ohmygod-prometheus-server-5c49bdb488-c4wsn prometheus-server-configmap-reload,prometheus-server
testpod1 appcntr1
```
***