## Prerequisites

For production purposes, it is recommended that:

- If you have only one node in your cluster, you need at least 6 CPUs, 6 GB of memory, and 30 GB of disk storage.
- If you have multiple nodes in your cluster, for each node you need **at least 2 CPUs, 4 GB of memory, and 20 GB of disk storage**.
- You have a cluster that uses Kubernetes v1.28 or newer.
- You have installed the kubectl CLI.
- Your Kubernetes cluster must have access to the internet, because Kubernetes needs to be able to fetch images. To pull from a private registry, see Deploying images from a private container registry.

> #### If I only have two nodes, do the Knative requirements allow two nodes with 2 CPUs / 4 GB of memory each? (4 CPUs total, less than the single-node 6 CPUs; 8 GB of memory total, more than 6 GB)
> According to the Knative requirements, with two nodes each node needs at least 2 CPUs, 4 GB of memory, and 20 GB of disk storage. So if your two nodes are configured like that, you fully meet the minimum requirements, and this setup can run Knative on a multi-node Kubernetes cluster.

---

## Installation

### CRDs and core

https://knative.dev/docs/install/yaml-install/serving/install-serving-with-yaml/#install-the-knative-serving-component

```bash
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.17.0/serving-crds.yaml
```

```bash
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.17.0/serving-core.yaml
```

### Install Kourier

```bash
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.17.0/kourier.yaml
```

```bash
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
```

### Remove the `{{ .Namespace }}` prefix

https://stackoverflow.com/questions/58106188/changing-public-url-in-knative-service-definition

```bash
kubectl patch configmap config-network \
  --namespace knative-serving \
  --patch '{"data": {"domainTemplate": "{{.Name}}.{{.Domain}}"}}'
```

> To revert:
> ```bash
> kubectl patch configmap config-network --namespace knative-serving --patch '{"data": {"domainTemplate": "{{.Name}}.{{.Namespace}}.{{.Domain}}"}}'
> ```

### Configure DNS

```bash
kubectl patch configmap config-domain \
  --namespace knative-serving \
  --patch '{"data": {"localhost": ""}}'
```

> To reset Knative's DNS configuration:
> ```bash
> kubectl delete configmap config-domain --namespace knative-serving
> ```
> ```bash
> kubectl create configmap config-domain --namespace knative-serving --from-literal='localhost='
> ```
> Note that `.svc.cluster.local` only resolves inside the cluster.

---

## Verification

### Start a Serving service

In Knative Serving, an HTTP Service usually listens on port 8080, because Knative routes traffic to port 8080 of your application's container by default. If your application uses a different port (for example 3000, 5000, or 80), declare it with `containerPort` in the Knative Service definition (YAML) — Knative also exposes the chosen port to the container as the `PORT` environment variable — or make sure your container listens on the correct port.

#### k3d-registry:5111/eventing

```bash
kn service create display-event \
  -n default \
  --image k3d-registry:5000/eventing \
  --annotation autoscaling.knative.dev/minScale=1
```

#### mendhak/http-https-echo

```bash
kn service create echo-server \
  -n default \
  --image mendhak/http-https-echo \
  --annotation autoscaling.knative.dev/minScale=1
```

#### nginx

```bash
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: echo-server
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
    spec:
      containers:
        - image: nginx
          ports:
            - containerPort: 80  # must be specified here, because the default is 8080
EOF
```

> To update minScale afterwards:
> ```bash
> kn service update -n default echo-server --annotation autoscaling.knative.dev/minScale=1
> ```

---

## projects > domains > modules

A domain name maps only to a namespace. One cluster represents one environment (e.g. prod) of one project, and that project and environment are best kept independent (isolated from other environments and even from other projects).

> In other words: projects > domains > modules

---

## ready: 2/2

`kubectl get pods` shows `READY 2/2` for a Knative pod because each pod runs two containers: your user container plus Knative's `queue-proxy` sidecar.

---

## readiness probe vs liveness probe

In Knative Serving, liveness probes and readiness probes both build on the underlying Kubernetes mechanisms, but they play different roles:

- **Liveness probe**: checks whether the service is "alive". If the liveness probe fails, Kubernetes restarts the whole container.
- **Readiness probe**: checks whether the service is "ready to receive traffic". Only once the readiness probe succeeds will Knative route traffic to the Pod.

How much time is allowed between the liveness probe succeeding and the readiness probe succeeding is up to your own configuration (the probes' delay and timeout settings).

---

## knative + websocket

Kourier itself does **not** support session affinity (sticky sessions). Kourier, Knative Serving's default lightweight ingress, is based on Envoy, but Kourier's current configuration does not expose Envoy's session affinity features (such as cookie-based or IP-hash affinity).
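To make the probe discussion concrete, here is a hedged sketch of a Knative Service declaring both probes on the user container. The service name, image, paths, and timings are illustrative assumptions, not values from this document, and Knative Serving only accepts a subset of the Kubernetes probe fields — check the current Serving docs before copying.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: probe-demo            # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: mendhak/http-https-echo
          readinessProbe:     # gates traffic: Knative routes to the Pod only once this succeeds
            httpGet:
              path: /
          livenessProbe:      # restarts the container if it stops responding
            httpGet:
              path: /
            initialDelaySeconds: 5
```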
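To see what the `domainTemplate` patch above actually changes, here is a small sketch of the resulting hostnames. This is not Knative code — it is a hypothetical Python stand-in for Go's `text/template` expansion, assuming plain `{{.Field}}` substitution, just to show the URL shape before and after the patch.

```python
import re

def render_domain_template(template: str, fields: dict) -> str:
    """Tiny stand-in for Go's text/template: replaces {{.Field}} tokens."""
    return re.sub(r"\{\{\s*\.(\w+)\s*\}\}", lambda m: fields[m.group(1)], template)

fields = {"Name": "echo-server", "Namespace": "default", "Domain": "example.com"}

# Default template keeps the namespace in the hostname:
print(render_domain_template("{{.Name}}.{{.Namespace}}.{{.Domain}}", fields))
# → echo-server.default.example.com

# The patched template drops the namespace segment:
print(render_domain_template("{{.Name}}.{{.Domain}}", fields))
# → echo-server.example.com
```

This is why the patch is described as removing the namespace prefix: every Service in the cluster then claims `name.domain` directly, which also means Service names must be unique across namespaces.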
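As a concrete illustration of the port convention from "Start a Serving service": an app that reads the injected `PORT` environment variable needs no `containerPort` declaration at all. The sketch below is a minimal, hypothetical Python server (handler name and response text are my own, not from this document) that listens on `PORT`, falling back to 8080 for local runs, then issues one request against itself to show it is serving.

```python
import os
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from knative\n")

# Knative routes traffic to the port it injects as PORT; default to 8080 locally.
port = int(os.environ.get("PORT", "8080"))

server = HTTPServer(("127.0.0.1", port), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
print(body.decode())
server.shutdown()
```

In a real container you would call `serve_forever()` on the main thread instead of shutting down; the self-request here just stands in for Knative's traffic reaching the resolved port.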