# K8s & Helm & GKE @ GCP

## Create a GKE cluster

0. Get a GCP account
1. Open Cloud Shell

```shell=
gcloud projects list
gcloud config set project {Project-Name}
gcloud config set compute/zone asia-east1-b
gcloud container clusters create cool-chicken
```

![](https://i.imgur.com/b4KKwVo.png)

[GKE release notes](https://cloud.google.com/kubernetes-engine/docs/release-notes)

Other configuration options are described [here](https://ithelp.ithome.com.tw/articles/10193961).
You can also consult the official reference by running `gcloud container clusters create --help`.
That is all it takes to create a GKE cluster.

## Connect to GKE

[Install gcloud](https://cloud.google.com/sdk/docs/quickstart-debian-ubuntu?hl=zh-tw)

```shell=
# Create environment variable for correct distribution
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"

# Add the Cloud SDK distribution URI as a package source
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

# Import the Google Cloud Platform public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

# Update the package list and install the Cloud SDK
sudo apt-get update && sudo apt-get install google-cloud-sdk

# Log in and init gcloud
gcloud init
gcloud container clusters get-credentials cool-chicken \
    --zone asia-east1-b \
    --project indigo-night-266711
```

Log in to gcloud with {YOUR_ACCOUNT}.

:::info
Once you have successfully initialized gcloud, you can run `gcloud container clusters get-credentials....` any time to log in to your k8s cluster.
:::

### How to check your kubectl config

```
cat ~/.kube/config
```

## kubectl: the k8s console

[Official doc](https://kubernetes.io/docs/tasks/tools/install-kubectl/)

```shell=
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
```

Set an alias for kubectl:

```shell=
alias k='kubectl'
```

### Create your first pod

Create your namespace:

```shell=
kubectl create namespace ${MY_NAMESPACE_NAME}
kubectl get namespace
```

Switch to your namespace:

```shell=
kubectl config set-context --current --namespace ${MY_NAMESPACE_NAME}
```

Query the resources under your namespace:

```shell=
# List all pods in the default namespace
kubectl get pod
# List all pods in the kube-system namespace
kubectl get pod -n kube-system
# List all pods across all namespaces
kubectl get pod -A
```

Create a random-logger in your namespace:

```shell=
kubectl run random-logger --image=chentex/random-logger
```

Check the logs of your random-logger:

```shell=
kubectl get deployment
kubectl get pod
kubectl logs -f {POD_NAME}
```

Delete the pod:

```shell=
kubectl delete pod {POD_NAME}
kubectl get pod
```

Why does the pod still exist? Because the Deployment restarted it.
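To see why the pod comes right back, check which controller owns it. This is a minimal sketch, assuming your kubectl version still creates a Deployment (and its ReplicaSet) for `kubectl run`; newer kubectl releases create a bare Pod instead, in which case the deleted pod stays gone. `{POD_NAME}` is the name of the freshly recreated pod.

```shell=
# Sketch: find the controller that recreated the pod
kubectl get pod                                            # note the new pod name (fresh suffix)
kubectl describe pod {POD_NAME} | grep -i 'controlled by'  # shows the owning ReplicaSet, if any
kubectl get replicaset                                     # the ReplicaSet managed by the random-logger Deployment
```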
Let's delete the deployment:

```shell=
kubectl get deployment
kubectl delete deployment {DEPLOYMENT_NAME}
```

> If you want to interact with a Pod,
> try `kubectl run ubuntu -it --rm --image=ubuntu:16.04`

### Debugging with kubectl

```shell=
kubectl exec -i -t {POD_NAME} -- sh
kubectl attach {POD_NAME}
```

:::info
`exec` runs a specified program inside the pod's container; `attach` connects to the main process of the pod's container.
:::

### Create an HTTP server

```shell=
kubectl run http --image=katacoda/docker-http-server:latest --port=80
kubectl port-forward {POD_NAME} 8000:80
curl localhost:8000
```

### Create an HTTP server with a Service

`kubectl expose` creates a Service for a Deployment:

```shell=
kubectl expose deployment http --target-port=80 --type=NodePort
kubectl get service
```

Check Service load balancing:

```shell=
kubectl scale deployment http --replicas=3
kubectl get deployment http
kubectl describe service http
# Endpoints: (contains 3 endpoints)
```

:::info
A Service can sit in front of multiple Deployments, but the mapping is usually one-to-one.
:::

Test load balancing with busybox:

```shell=
kubectl run busybox -it --rm --image=busybox --restart=Never
ping http
# the Service cluster IP (e.g. 172.20.102.166) will be shown
wget -O - http
wget -O - http
wget -O - http
```

### Neighbors inside the cluster

Every Service in the cluster is automatically registered in the cluster's internal DNS, so any pod in the cluster can use it to reach other pods.

e.g. cross-namespace access: `<service-name>.<namespace-name>.svc.cluster.local`

In busybox:

```shell=
ping http.{YOUR_NAMESPACE}.svc.cluster.local
wget -O - http.{YOUR_NAMESPACE}.svc.cluster.local
wget -O - http.{YOUR_NAMESPACE}.svc.cluster.local
wget -O - http.{YOUR_NAMESPACE}.svc.cluster.local
exit
```

> This part works even without configuring the Service.

## Deploy resources with YAML manifests

----

### Deployment manifest

```yaml=
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp1              # deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp1
  template:
    metadata:
      labels:
        app: webapp1         # service selector
    spec:
      containers:
      - name: webapp1        # container name
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
```

```shell=
kubectl create -f deployment.yaml
kubectl get deployment
kubectl describe deployment webapp1
```

----

### Service manifest

```yaml=
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: webapp1             # service selector
```

```shell=
kubectl create -f service.yaml
kubectl get service
kubectl describe service webapp1
```

----

### Ingress manifest

```yaml=
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp1
spec:
  defaultBackend:
    service:
      name: webapp1
      port:
        number: 80
```

```shell=
kubectl create -f ingress.yaml
kubectl get ingress
```

> You can check the status of the HTTP load balancer [here](https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list?project=indigo-night-266711&hl=zh-tw).

### Test your webapp

Open a browser:

```
http://{ADDRESS}
```

### How Ingress works

![](https://i.imgur.com/qSBJcop.png)

The NGINX ingress controller in the diagram is what you would run elsewhere (e.g. on AWS); GKE uses Google's own HTTP(S) load balancer instead.

## Helm

### Install Helm

```shell
# Install from the binary release
wget https://get.helm.sh/helm-v3.0.2-linux-amd64.tar.gz
mkdir -p ~/download/helm
tar -C ~/download/helm -xzf helm-v3.0.2-linux-amd64.tar.gz
sudo mv ~/download/helm/linux-amd64/helm /usr/local/bin/

# Add the stable chart repo
helm repo add stable https://charts.helm.sh/stable
```

Install with a local Helm chart:

```shell=
git clone https://github.com/helm/charts.git
cd charts/stable/hackmd
helm dependency update .
helm install hackmd .
```
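Once the install finishes, it helps to verify the release. A minimal sketch using standard Helm 3 commands; the release name `hackmd` is the one chosen in the install above:

```shell=
# Sketch: inspect the release created above
helm list                  # releases in the current namespace
helm status hackmd         # deployment status and the chart's NOTES output
helm get values hackmd     # values the release was installed with
kubectl get pod            # pods created by the chart
# helm uninstall hackmd    # clean up the release when you are done
```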
Or install directly from the chart repository:

```shell=
helm install hackmd stable/hackmd
kubectl port-forward {YOUR_POD} 8080:3000
```

## Reference

* [k8s official doc](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* [Google GKE doc](https://cloud.google.com/kubernetes-engine/docs)
* [k8s workshop Pt 0](https://hackmd.io/@QPF2k-LMR6SQ-qHvUe6lNA/S1Hps_fnB)
* [Day 27 - 漫步在雲端:在GCP 中建立k8s 叢集 - iT 邦幫忙](https://ithelp.ithome.com.tw/articles/10193961)
* kube-ops-view