# ECK on AKS – Step-by-Step Installation Guide

> This document is a **clean, reproducible, end-to-end guide** to installing
> **Elastic Cloud on Kubernetes (ECK)** on **AKS**, including:
>
> - Elasticsearch
> - Kibana
> - Fleet Server
> - Elastic Agent (DaemonSet)
> - Kubernetes logs & metrics
> - Data retention to protect disk usage

Tested with:

- Elastic Stack 8.13.x
- ECK 2.13.x
- AKS

---

![image](https://hackmd.io/_uploads/HJZrYOuvWe.png)

## 0. Prerequisites

```bash
kubectl version          # --short was removed in kubectl 1.28
kubectl config current-context
kubectl auth can-i '*' '*' --all-namespaces
```

Namespaces:

```bash
kubectl create ns elastic || true
kubectl create ns elastic-system || true
```

---

## 1. Install ECK Operator

### Option A – Helm (recommended if already using Helm)

```bash
helm repo add elastic https://helm.elastic.co
helm repo update
helm upgrade --install eck-operator elastic/eck-operator -n elastic-system --create-namespace
```

Verify:

```bash
kubectl -n elastic-system get pods
kubectl get crd | grep elastic
```

---

### Option B – Official Manifests (simple & explicit)

```bash
kubectl apply -f https://download.elastic.co/downloads/eck/2.13.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.13.0/operator.yaml
```

---

## 2. Deploy Elasticsearch (ECK)

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es
  namespace: elastic
spec:
  version: 8.13.4
  http:
    tls:
      selfSignedCertificate:
        disabled: false
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
```

Apply & verify:

```bash
kubectl apply -f elasticsearch.yaml
kubectl -n elastic get elasticsearch
kubectl -n elastic get pods
```

---

## 3. Deploy Kibana (ECK)

> ⚠️ **Do NOT set `server.basePath` at first.**
> The ECK readiness probe is hardcoded to `/login`, so a base path leaves Kibana stuck in NotReady.
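Before applying the Kibana manifest below, it can help to block until the Elasticsearch resource reports green health. A sketch (the `--for=jsonpath` form requires kubectl ≥ 1.23):

```shell
# Wait until the Elasticsearch CR named "es" reports green health
# before deploying Kibana, so the elasticsearchRef association succeeds immediately.
kubectl -n elastic wait --for=jsonpath='{.status.health}'=green \
  elasticsearch/es --timeout=10m
```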
```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kb
  namespace: elastic
spec:
  version: 8.13.4
  count: 1
  elasticsearchRef:
    name: es
  config:
    xpack.fleet.agents.enabled: true
    xpack.fleet.agents.fleet_server.hosts:
    - https://fleet-server-agent-http.elastic.svc:8220
```

> Note: the Fleet Server host must be set via `xpack.fleet.agents.fleet_server.hosts` — Kibana 8.x refuses to start on unknown config keys.

```bash
kubectl apply -f kibana.yaml
kubectl -n elastic get kibana
kubectl -n elastic get pods
```

---

## 4. Get the `elastic` Password

```bash
kubectl -n elastic get secret es-es-elastic-user \
  -o jsonpath='{.data.elastic}' | base64 -d
```

---

## 5. Initialize Fleet

```bash
PASS=$(kubectl -n elastic get secret es-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d)
KB_POD=$(kubectl -n elastic get pod -l kibana.k8s.elastic.co/name=kb -o jsonpath='{.items[0].metadata.name}')

kubectl -n elastic exec -it $KB_POD -- bash -lc \
  'curl -sk -u "elastic:'"$PASS"'" -H "kbn-xsrf: true" \
   -X POST https://localhost:5601/api/fleet/setup'
```

Expected:

```json
{"isInitialized":true}
```

---

## 6. Create Agent Policies

> The commands below target `https://localhost:5601`, so run them from inside the Kibana pod (exec in as in step 5) with `$PASS` set.

### Fleet Server Policy

```bash
curl -sk -u "elastic:$PASS" -H "kbn-xsrf: true" \
  -X POST https://localhost:5601/api/fleet/agent_policies \
  -H "Content-Type: application/json" \
  -d '{
    "name": "fleet-server-policy",
    "namespace": "default",
    "has_fleet_server": true,
    "monitoring_enabled": ["logs","metrics"]
  }'
```

### Kubernetes (pdng) Policy

```bash
curl -sk -u "elastic:$PASS" -H "kbn-xsrf: true" \
  -X POST https://localhost:5601/api/fleet/agent_policies \
  -H "Content-Type: application/json" \
  -d '{
    "name": "k8s-pdng-logs",
    "namespace": "pdng",
    "monitoring_enabled": ["logs","metrics"]
  }'
```

Save the returned `id` values.

---
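The `id` can also be captured without copying it by hand: the Fleet API returns the created policy under `.item`, which `jq` can extract. A sketch with a simulated response (`RESP` stands in for the output of the `curl` calls above; in practice pipe the `curl` output straight into the same `jq` filter):

```shell
# RESP simulates the Fleet API response shape {"item": {...}}.
RESP='{"item":{"id":"<K8S_AGENT_POLICY_ID>","name":"k8s-pdng-logs","namespace":"pdng"}}'

# Extract the policy id for use in the Agent manifests below.
K8S_POLICY_ID=$(echo "$RESP" | jq -r '.item.id')
echo "$K8S_POLICY_ID"
```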
## 7. Install Integration Packages

```bash
# Kubernetes
curl -sk -u "elastic:$PASS" -H "kbn-xsrf: true" \
  -X POST https://localhost:5601/api/fleet/epm/packages/kubernetes \
  -H "Content-Type: application/json" -d '{"force":true}'

# System
curl -sk -u "elastic:$PASS" -H "kbn-xsrf: true" \
  -X POST https://localhost:5601/api/fleet/epm/packages/system \
  -H "Content-Type: application/json" -d '{"force":true}'
```

> ⚠️ On a first-time setup: **create the Kubernetes integration via the Kibana UI**, then export the JSON.

### 7.5 Create the Kubernetes Integration via the Kibana UI (first-time setup only)

```
Kibana UI → Fleet → Agent Policies → k8s-pdng-logs → Add integration → Kubernetes
```

1️⃣ Namespace:

```
Namespace: pdng
```

> ⚠️ This namespace becomes the index suffix → `logs-kubernetes.container_logs-pdng`

✔ Success check: on the policy page you should see:

```
Integrations:
- Kubernetes
- System   # optional
```

---

## 8. Deploy Fleet Server (Agent – Deployment)

```yaml
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
  namespace: elastic
spec:
  version: 8.13.4
  mode: fleet
  fleetServerEnabled: true
  policyID: <FLEET_SERVER_POLICY_ID>
  kibanaRef:
    name: kb
  elasticsearchRefs:
  - name: es
  deployment:
    replicas: 1
```

```bash
kubectl apply -f fleet-server.yaml
kubectl -n elastic get agent
```

---

## 9. Deploy Elastic Agent (DaemonSet)

### Elastic Agent – Fleet Mode (DaemonSet)

```yaml
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent-pdng
  namespace: elastic
spec:
  version: 8.13.4
  mode: fleet
  kibanaRef:
    name: kb
  fleetServerRef:
    name: fleet-server
  policyID: "<K8S_AGENT_POLICY_ID>"
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: elastic-agent
        automountServiceAccountToken: true
        hostNetwork: true
        ...
```

Apply:

```bash
kubectl apply -f elastic-agent-pdng.yaml
kubectl -n elastic get agent
```

Verify:

```bash
kubectl -n elastic get agent
kubectl -n elastic get ds
kubectl -n elastic get pods -l agent.k8s.elastic.co/name=elastic-agent-pdng
```

---

## 10. MUST DO – Re-Save the Kubernetes Integration

> Even if the UI shows it as enabled → **save it again**.

```
Kibana → Fleet → Agent Policies → Kubernetes → Save
```

This triggers a new policy revision, which Fleet pushes out to the agents.

---

## 11. Verify Data Streams

```bash
curl -sk -u "elastic:$PASS" \
  https://es-es-http.elastic.svc:9200/_data_stream \
  | jq -r '.data_streams[].name' \
  | egrep "logs-kubernetes|metrics-kubernetes"
```

---

## 12. Discover (Kibana)

- Data view: `logs-*`, `metrics-*`
- KQL:

```kql
kubernetes.namespace: "pdng"
```

---

## 13. Data Retention (Disk Protection)

Data retention and disk watermark settings must be applied only **after** the data streams exist and are actively ingesting; applied earlier, they are a no-op or get overwritten.

Keep only 6 hours of logs and metrics, and set the cluster disk watermarks:

```bash
curl -sk -u "elastic:${PASS}" -H 'Content-Type: application/json' \
  -X PUT "https://es-es-http.elastic.svc:9200/_data_stream/logs-kubernetes.container_logs-pdng/_lifecycle" \
  -d '{ "data_retention": "6h" }'

# container_logs is a logs dataset, so target the metrics streams with a wildcard
curl -sk -u "elastic:${PASS}" -H 'Content-Type: application/json' \
  -X PUT "https://es-es-http.elastic.svc:9200/_data_stream/metrics-kubernetes.*-pdng/_lifecycle" \
  -d '{ "data_retention": "6h" }'

curl -sk -u "elastic:${PASS}" -H 'Content-Type: application/json' \
  -X PUT "https://es-es-http.elastic.svc:9200/_cluster/settings" \
  -d '{
    "persistent": {
      "cluster.routing.allocation.disk.watermark.low": "75%",
      "cluster.routing.allocation.disk.watermark.high": "85%",
      "cluster.routing.allocation.disk.watermark.flood_stage": "90%"
    }
  }'
```

---
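To sanity-check that a 6h window actually caps disk usage, a quick back-of-envelope helps (the ingest rate below is a hypothetical figure — measure yours from the backing index sizes):

```shell
# Rough upper bound on disk used by one data stream under retention:
# retained ≈ ingest rate × retention window (plus one not-yet-deleted backing index).
RATE_MB_PER_HOUR=50   # hypothetical rate for logs-kubernetes.container_logs-pdng
RETENTION_HOURS=6
echo "$(( RATE_MB_PER_HOUR * RETENTION_HOURS )) MB retained at most"  # 300 MB retained at most
```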
## 14. Common Pitfalls

| Issue | Cause |
|-------|-------|
| No logs | Kubernetes integration not re-saved (step 10) |
| No metrics | ServiceAccount token not mounted |
| 403 errors | Missing cluster RBAC for the agent ServiceAccount |
| Kibana not Ready | `server.basePath` set (readiness probe expects `/login`) |

---

## ✅ Result

- Fleet Server: green
- Elastic Agent: green
- `logs-kubernetes*` present
- `metrics-kubernetes*` present
- Disk usage bounded by retention + watermarks

🎉