# How to Authenticate Users in Kubernetes with Keycloak OIDC Provider for Secure Cluster Access

## 1. Preface

This article shows how to use Keycloak as an OIDC provider to protect the Kubernetes API Server, combined with Kubernetes RBAC to control access for Keycloak users and groups.

## 2. Pain Points and the Solution

Static kubeconfig files not only slow teams down, they also open security holes.

For AI developers, researchers, and Kubernetes administrators, cluster access often means painstakingly managing long-lived credentials scattered across many machines and users. That is inefficient, hard to audit, and a recurring compliance headache.

**The solution: connect Kubernetes to your existing Identity Provider with OpenID Connect (OIDC).**

## 3. What Is OpenID Connect (OIDC)?

OpenID Connect (OIDC) is an identity layer built on top of OAuth 2.0. **It lets a client (such as Kubernetes) verify a user's identity through an external Identity Provider (IdP) and obtain basic profile information in a standardized, REST-like way.**

:::spoiler Understanding how OAuth, OIDC, and JWT relate to each other
> OAuth (Open Authorization) is an open standard for authorization. It lets users authorize third-party applications to access information they keep with another service provider, without handing their username and password to the third party. OAuth is widely adopted worldwide; the current version is 2.0.
>
> OpenID Connect (OIDC) is an authentication protocol built on the OAuth 2.0 family of specifications. OAuth 2.0 provides the `access_token` to authorize third-party clients to access protected resources; OpenID Connect adds the `id_token` on top of that so the third-party client can identify the user.
>
> The core idea of OpenID Connect is to deliver the user's identity information (the `id_token`) to the third-party client as part of the OAuth 2.0 authorization flow. The `id_token` is packaged as a JWT (JSON Web Token); because JWTs are self-contained, compact, and tamper-evident, the `id_token` can be passed to third-party client programs safely and verified easily.
>
> JSON Web Token (JWT) is an open industry standard (RFC 7519) that defines a compact, self-contained format for passing JSON objects between two parties; the transmitted information is digitally signed so it can be verified and trusted. For the details of JWT, see [JWT (JSON Web Token)](https://mp.weixin.qq.com/s?__biz=MzkxOTIwMDgxMg==&mid=2247484310&idx=1&sn=fea9b5bb11623fd447847e35b4057a56&scene=21&poc_token=HIlpYGmj5mtklXk6OiegATPG-sz_tRGy6aaPyfpt).
:::

In a Kubernetes environment, OIDC replaces static, locally managed credentials with trust in the organization's existing authentication system. This enables:

* **Stronger security**: short-lived tokens instead of static kubeconfig files.
* **Faster personnel changes**: unified identity lifecycle management speeds up onboarding and deprovisioning.
* **Built-in compliance**: centralized authentication and auditability.

## 4. The Problem with Static Kubeconfig Files

Many Kubernetes clusters still rely on manually distributed kubeconfig files whose embedded credentials stay valid for months, or even forever. That leads to:

* **Security risk**: once a config file is stolen, an attacker can keep accessing the cluster undetected.
* **Operational friction**: the user client certificate in a kubeconfig file normally has to be rotated once a year, and credential rotation is slow and error-prone.
* **Scaling bottleneck**: as the team grows, managing per-user config files becomes unmanageable.

## 5. How OIDC Solves This

Integrating OIDC with Kubernetes eliminates manual kubeconfig distribution by letting you:

* **Remove user client certificates**: every session must authenticate through the IdP.
* **Enforce role-based access control (RBAC)**: map IdP groups to Kubernetes RBAC roles.
* **Centralize identity management**: manage all users and groups in the IdP instead of scattering them across config files.

## 6. Introducing Keycloak

Keycloak is an open-source IAM (Identity and Access Management) solution for modern applications and services. It provides single sign-on (SSO), supports the **OpenID Connect**, **OAuth 2.0**, and **SAML 2.0** protocols, and can also federate external identity services such as LDAP, Active Directory, GitHub, Google, and Facebook.
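Since the rest of this article revolves around what ends up inside the `id_token`, it helps to be able to peek into one. Below is a minimal sketch of decoding a JWT payload in the shell; the `TOKEN` value is a placeholder, and any real `id_token` obtained later in this walkthrough (it will carry claims such as `preferred_username` and the groups claim we configure) can be substituted.

```
# Placeholder -- paste a real JWT (header.payload.signature) here.
TOKEN="eyJhbGciOi...<header>.eyJzdWIiOi...<payload>.<signature>"

# JWTs are base64url-encoded without padding, so translate the alphabet
# back to standard base64 and re-pad before decoding the payload (the
# second dot-separated segment) and pretty-printing it with jq.
echo "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+' \
  | awk '{ pad = (4 - length($0) % 4) % 4; print $0 substr("====", 1, pad) }' \
  | base64 -d | jq .
```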
---

# <font color=blue>Deploying and Initializing HA Keycloak</font>

## 1. Prerequisites

- A running Kubernetes cluster; the cluster version used here is `v1.34.2`.
- `kubectl` and `wget` installed on the operating system.
- `cluster-admin` privileges.
  - Verify by running `kubectl auth whoami`
  - Expected output:

    ```
    ATTRIBUTE                                           VALUE
    Username                                            kubernetes-admin
    Groups                                              [kubeadm:cluster-admins system:authenticated]
    Extra: authentication.kubernetes.io/credential-id   [X509SHA256=e0692cad816cb69cf9f8685f14343bd5f06618cb6ddbae57e70632ea595064dc]
    ```
- An ingress controller installed in Kubernetes; in my environment it is `ingress-nginx`.
  - Verify by running `kubectl get ingressclasses`
  - Expected output:

    ```
    NAME    CONTROLLER             PARAMETERS   AGE
    nginx   k8s.io/ingress-nginx   <none>       37d
    ```
- The K8s nodes used in this article:

  ```
  NAME                        STATUS   ROLES            AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
  kubeadm-c1.kubeantony.com   Ready    control-plane    37d   v1.34.2   10.200.8.21   <none>        Ubuntu 24.04.3 LTS   6.8.0-88-generic   cri-o://1.34.3
  kubeadm-c2.kubeantony.com   Ready    control-plane    37d   v1.34.2   10.200.8.22   <none>        Ubuntu 24.04.3 LTS   6.8.0-71-generic   cri-o://1.34.3
  kubeadm-c3.kubeantony.com   Ready    control-plane    37d   v1.34.2   10.200.8.23   <none>        Ubuntu 24.04.3 LTS   6.8.0-71-generic   cri-o://1.34.3
  kubeadm-w1.kubeantony.com   Ready    logging,worker   37d   v1.34.2   10.200.8.24   <none>        Ubuntu 24.04.3 LTS   6.8.0-71-generic   cri-o://1.34.3
  kubeadm-w2.kubeantony.com   Ready    logging,worker   37d   v1.34.2   10.200.8.25   <none>        Ubuntu 24.04.3 LTS   6.8.0-71-generic   cri-o://1.34.3
  kubeadm-w3.kubeantony.com   Ready    logging,worker   37d   v1.34.2   10.200.8.26   <none>        Ubuntu 24.04.3 LTS   6.8.0-71-generic   cri-o://1.34.3
  ```

## 2. Install the Keycloak Operator

1. SSH into a K8s management host that can run `kubectl`.

2. Create and switch to the working directory:

```
mkdir -p ${HOME}/k8s/kubeadm/addon/keycloak; cd ${HOME}/k8s/kubeadm/addon/keycloak
```

3. Create the Kustomize directory structure:

```
mkdir -p {env/test,resource/base-v26.4.7/operator/crd}
```

4. View the current directory structure:

```
tree -d .
```

Output:

```
.
├── env
│   └── test            ## YAML files for Keycloak in the test environment
└── resource
    └── base-v26.4.7
        └── operator    ## YAML files for the Keycloak operator
            └── crd

7 directories
```

5. Download the Keycloak operator CRD YAML files:

```
wget -P resource/base-v26.4.7/operator/crd \
https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/26.4.7/kubernetes/keycloaks.k8s.keycloak.org-v1.yml

wget -P resource/base-v26.4.7/operator/crd \
https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/26.4.7/kubernetes/keycloakrealmimports.k8s.keycloak.org-v1.yml
```

6. Write the Keycloak operator namespace YAML:

```
cat <<EOF > resource/base-v26.4.7/operator/keycloak-operator-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: keycloak
EOF
```

7. Download the Keycloak operator deployment YAML:

```
wget -O resource/base-v26.4.7/operator/keycloak-operator-deployment.yaml \
https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/26.4.7/kubernetes/kubernetes.yml
```

8. Write the Keycloak operator kustomization YAML:

```
cat <<EOF > resource/kustomization.yaml
kind: Kustomization
resources:
- ./base-v26.4.7/operator/crd/keycloaks.k8s.keycloak.org-v1.yml
- ./base-v26.4.7/operator/crd/keycloakrealmimports.k8s.keycloak.org-v1.yml
- ./base-v26.4.7/operator/keycloak-operator-ns.yaml
- ./base-v26.4.7/operator/keycloak-operator-deployment.yaml
namespace: keycloak
EOF
```

9. Dry-run the Keycloak operator deployment to check for mistakes:

```
kubectl apply -k resource/ --dry-run=server
```

Output:

```
Warning: resource namespaces/keycloak is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/keycloak configured (server dry run)
customresourcedefinition.apiextensions.k8s.io/keycloakrealmimports.k8s.keycloak.org created (server dry run)
customresourcedefinition.apiextensions.k8s.io/keycloaks.k8s.keycloak.org created (server dry run)
serviceaccount/keycloak-operator created (server dry run)
role.rbac.authorization.k8s.io/keycloak-operator-role created (server dry run)
clusterrole.rbac.authorization.k8s.io/keycloak-operator-clusterrole created (server dry run)
clusterrole.rbac.authorization.k8s.io/keycloakcontroller-cluster-role created (server dry run)
clusterrole.rbac.authorization.k8s.io/keycloakrealmimportcontroller-cluster-role created (server dry run)
rolebinding.rbac.authorization.k8s.io/keycloak-operator-role-binding created (server dry run)
rolebinding.rbac.authorization.k8s.io/keycloak-operator-view created (server dry run)
rolebinding.rbac.authorization.k8s.io/keycloakcontroller-role-binding created (server dry run)
rolebinding.rbac.authorization.k8s.io/keycloakrealmimportcontroller-role-binding created (server dry run)
clusterrolebinding.rbac.authorization.k8s.io/keycloak-operator-clusterrole-binding created (server dry run)
service/keycloak-operator created (server dry run)
deployment.apps/keycloak-operator created (server dry run)
```

10. If everything looks right, deploy for real:

```
kubectl apply -k resource/
```

11. Check the Keycloak operator pod status:

```
kubectl -n keycloak get pods
```

Output:

```
NAME                               READY   STATUS    RESTARTS   AGE
keycloak-operator-59c9ff54-h9wvv   1/1     Running   0          2m13s
```
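As a complement to the server-side dry run in step 9, the overlay can also be rendered entirely locally, which catches broken resource paths in `kustomization.yaml` without any cluster round-trip. A minimal sketch:

```
# Render the kustomization locally; no cluster access required.
kubectl kustomize resource/ | head -n 15

# Quick inventory of what would be applied.
kubectl kustomize resource/ | grep '^kind:' | sort | uniq -c
```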
## 3. Install the Local Path Provisioner (skip if the environment already has a StorageClass)

1. Deploy the local path provisioner:

```
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.34/deploy/local-path-storage.yaml
```

2. Check the local path provisioner pod status:

```
kubectl -n local-path-storage get pods
```

Output:

```
NAME                                      READY   STATUS    RESTARTS      AGE
local-path-provisioner-5cf8df5d54-wtl8k   1/1     Running   1 (12h ago)   12h
```

3. View the StorageClass name:

```
kubectl get sc
```

Output:

```
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  107d
```

## 4. Install a PostgreSQL Cluster with the CloudNativePG Operator

1. Create and switch to the working directory:

```
mkdir -p ${HOME}/k8s/kubeadm/addon/cloudNativePG; cd ${HOME}/k8s/kubeadm/addon/cloudNativePG
```

2. Create the Kustomize directory structure:

```
mkdir -p {env/test,resource/base-v1.28.0/operator}
```

3. View the current directory structure:

```
tree -d .
```

Output:

```
.
├── env
│   └── test
└── resource
    └── base-v1.28.0
        └── operator

6 directories
```

4. Download the CloudNativePG Operator YAML:

```
wget -P resource/base-v1.28.0/operator \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.28/releases/cnpg-1.28.0.yaml
```

5. Write the CloudNativePG Operator kustomization YAML:

```
cat <<EOF > resource/kustomization.yaml
kind: Kustomization
resources:
- ./base-v1.28.0/operator/cnpg-1.28.0.yaml
EOF
```

6. Dry-run the CloudNativePG Operator deployment:

```
kubectl apply -k resource/ --dry-run=server
```

Output:

```
namespace/cnpg-system created (server dry run)
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.cnpg.io created (server dry run)
customresourcedefinition.apiextensions.k8s.io/clusterimagecatalogs.postgresql.cnpg.io created (server dry run)
customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.cnpg.io created (server dry run)
customresourcedefinition.apiextensions.k8s.io/databases.postgresql.cnpg.io created (server dry run)
customresourcedefinition.apiextensions.k8s.io/failoverquorums.postgresql.cnpg.io created (server dry run)
customresourcedefinition.apiextensions.k8s.io/imagecatalogs.postgresql.cnpg.io created (server dry run)
customresourcedefinition.apiextensions.k8s.io/publications.postgresql.cnpg.io created (server dry run)
customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.cnpg.io created (server dry run)
customresourcedefinition.apiextensions.k8s.io/subscriptions.postgresql.cnpg.io created (server dry run)
clusterrole.rbac.authorization.k8s.io/cnpg-database-editor-role created (server dry run)
clusterrole.rbac.authorization.k8s.io/cnpg-database-viewer-role created (server dry run)
clusterrole.rbac.authorization.k8s.io/cnpg-manager created (server dry run)
clusterrole.rbac.authorization.k8s.io/cnpg-publication-editor-role created (server dry run)
clusterrole.rbac.authorization.k8s.io/cnpg-publication-viewer-role created (server dry run)
clusterrole.rbac.authorization.k8s.io/cnpg-subscription-editor-role created (server dry run)
clusterrole.rbac.authorization.k8s.io/cnpg-subscription-viewer-role created (server dry run)
clusterrolebinding.rbac.authorization.k8s.io/cnpg-manager-rolebinding created (server dry run)
mutatingwebhookconfiguration.admissionregistration.k8s.io/cnpg-mutating-webhook-configuration created (server dry run)
validatingwebhookconfiguration.admissionregistration.k8s.io/cnpg-validating-webhook-configuration created (server dry run)
Error from server (Invalid): error when creating "resource/": CustomResourceDefinition.apiextensions.k8s.io "poolers.postgresql.cnpg.io" is invalid: metadata.annotations: Too long: may not be more than 262144 bytes
Error from server (NotFound): error when creating "resource/": namespaces "cnpg-system" not found
Error from server (NotFound): error when creating "resource/": namespaces "cnpg-system" not found
Error from server (NotFound): error when creating "resource/": namespaces "cnpg-system" not found
Error from server (NotFound): error when creating "resource/": namespaces "cnpg-system" not found
```

> These errors are expected. A server-side dry run never actually persists the `cnpg-system` namespace, so the namespaced resources report `NotFound`, and the `poolers` CRD is too large to fit into client-side apply's `kubectl.kubernetes.io/last-applied-configuration` annotation (capped at 262144 bytes). Both problems go away with the server-side apply in the next step.

7. Deploy the CloudNativePG Operator (server-side apply keeps the object on the server and skips the annotation entirely):

```
kubectl apply -k resource/ --server-side
```

8. Check the CloudNativePG Operator pod status:

```
kubectl -n cnpg-system get pods
```

Output:

```
NAME                                       READY   STATUS    RESTARTS   AGE
cnpg-controller-manager-6b9f78f594-8g2qt   1/1     Running   0          40s
```

9. Write the PostgreSQL cluster namespace YAML:

```
cat <<EOF > env/test/cnpg-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cnpg
EOF
```

10. Write the secret YAML holding the credentials of the user that initializes the database (the `username` and `password` values are base64; `YmlncmVk` decodes to `bigred`):

```
cat <<EOF > env/test/cnpg-user-secret.yaml
apiVersion: v1
data:
  password: YmlncmVk
  username: YmlncmVk
kind: Secret
metadata:
  name: cnpg-bigred
  namespace: cnpg
EOF
```

11. Write the PostgreSQL cluster YAML:

```
cat <<EOF > env/test/cnpg-cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: psql
  namespace: cnpg
spec:
  instances: 3
  imageName: ghcr.io/cloudnative-pg/postgresql:17.5
  bootstrap:
    initdb:
      database: keycloak
      owner: bigred
      secret:
        name: cnpg-bigred
  storage:
    storageClass: local-path
    size: 1Gi
EOF
```

12. Write the PostgreSQL cluster pooler YAML:

```
cat <<EOF > env/test/cnpg-pooler.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: pooler-rw
  namespace: cnpg
spec:
  cluster:
    name: psql
  instances: 3
  type: rw
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"
      default_pool_size: "10"
EOF
```

13. Write the PostgreSQL cluster kustomization YAML:

```
cat <<EOF > env/test/kustomization.yaml
kind: Kustomization
resources:
- ./cnpg-ns.yaml
- ./cnpg-user-secret.yaml
- ./cnpg-cluster.yaml
- ./cnpg-pooler.yaml
EOF
```

14. Dry-run the PostgreSQL cluster deployment:

```
kubectl apply -k env/test/ --dry-run=server
```

Output (the `NotFound` errors are expected for the same reason as above: the dry run does not persist the `cnpg` namespace):

```
namespace/cnpg created (server dry run)
Error from server (NotFound): error when creating "env/test/": namespaces "cnpg" not found
Error from server (NotFound): error when creating "env/test/": namespaces "cnpg" not found
Error from server (NotFound): error when creating "env/test/": namespaces "cnpg" not found
```

15. Deploy the PostgreSQL cluster for real:

```
kubectl apply -k env/test/
```

Output:

```
namespace/cnpg created
secret/cnpg-bigred created
cluster.postgresql.cnpg.io/psql created
pooler.postgresql.cnpg.io/pooler-rw created
```

16. Check the PostgreSQL cluster pod status:

```
kubectl -n cnpg get pods -o wide
```

Output:

```
NAME                        READY   STATUS    RESTARTS   AGE    IP             NODE         NOMINATED NODE   READINESS GATES
pooler-rw-8949649fd-9fw6z   1/1     Running   0          8m9s   10.244.4.204   kubeadm-w2   <none>           <none>
pooler-rw-8949649fd-h47mt   1/1     Running   0          8m9s   10.244.3.106   kubeadm-w1   <none>           <none>
pooler-rw-8949649fd-n862s   1/1     Running   0          8m9s   10.244.5.208   kubeadm-w3   <none>           <none>
psql-1                      1/1     Running   0          7m2s   10.244.3.1     kubeadm-w1   <none>           <none>
psql-2                      1/1     Running   0          6m5s   10.244.5.164   kubeadm-w3   <none>           <none>
psql-3                      1/1     Running   0          4m59s  10.244.4.229   kubeadm-w2   <none>           <none>
```

17. View the PostgreSQL cluster services:

```
kubectl -n cnpg get svc
```

Output:

```
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
pooler-rw   ClusterIP   10.96.28.116    <none>        5432/TCP   6m43s
psql-r      ClusterIP   10.96.216.168   <none>        5432/TCP   6m44s
psql-ro     ClusterIP   10.96.3.136     <none>        5432/TCP   6m44s
psql-rw     ClusterIP   10.96.21.102    <none>        5432/TCP   6m44s
```
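Before wiring Keycloak to this database, it is worth confirming that PostgreSQL actually accepts connections through the pooler. A minimal sketch using a throwaway psql client pod; the image and the `bigred` credentials are the ones assumed throughout this article:

```
# Launch a one-off psql client and connect through the rw pooler.
kubectl run psql-client --rm -it --restart=Never \
  --image=ghcr.io/cloudnative-pg/postgresql:17.5 \
  --env=PGPASSWORD=bigred -- \
  psql -h pooler-rw.cnpg.svc.cluster.local -U bigred -d keycloak -c 'SELECT version();'

# CNPG also reports overall cluster health in the status subresource.
kubectl -n cnpg get clusters.postgresql.cnpg.io psql -o jsonpath='{.status.phase}{"\n"}'
```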
## 5. Install cert-manager

1. Create and switch to the working directory:

```
mkdir -p ${HOME}/k8s/kubeadm/addon/cert-manager; cd ${HOME}/k8s/kubeadm/addon/cert-manager
```

2. Create the Kustomize directory structure:

```
mkdir -p {env/test,resource/base-v1.19.2}
```

3. View the current directory structure:

```
tree -d .
```

Output:

```
.
├── env
│   └── test
└── resource
    └── base-v1.19.2

5 directories
```

4. Download the cert-manager YAML:

```
wget -P resource/base-v1.19.2 \
https://github.com/cert-manager/cert-manager/releases/download/v1.19.2/cert-manager.yaml
```

5. Write the cert-manager base kustomization YAML:

```
cat <<EOF > resource/kustomization.yaml
kind: Kustomization
resources:
- ./base-v1.19.2/cert-manager.yaml
EOF
```

6. Write the test-environment cert-manager kustomization YAML:

```
cat <<EOF > env/test/kustomization.yaml
kind: Kustomization
resources:
- ../../resource/
EOF
```

7. Dry-run the cert-manager deployment:

```
kubectl apply -k env/test/ --dry-run=server
```

8. Deploy cert-manager for real:

```
kubectl apply -k env/test/
```

9. Check the cert-manager pod status:

```
kubectl -n cert-manager get pods
```

Output:

```
NAME                                      READY   STATUS    RESTARTS      AGE
cert-manager-548f7cf98c-v47q9             1/1     Running   1 (14h ago)   14h
cert-manager-cainjector-8798f647f-8f2xl   1/1     Running   1 (14h ago)   14h
cert-manager-webhook-6c8678dc46-hw5r4     1/1     Running   1 (14h ago)   14h
```
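The next section creates `Certificate` and `Issuer` resources, and those can be rejected if cert-manager's validating webhook is not yet serving. A small sketch for gating on readiness in a script, assuming the default deployment names shown above:

```
# Block until all three cert-manager deployments are available; the
# webhook in particular must be up before Certificate objects apply.
kubectl -n cert-manager wait deployment --all \
  --for=condition=Available --timeout=180s
```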
## 6. Install HA Keycloak with the Keycloak Operator

1. Switch to the working directory:

```
cd ${HOME}/k8s/kubeadm/addon/keycloak
```

2. Write the Keycloak namespace YAML:

```
cat <<EOF > env/test/keycloak-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: keycloak
EOF
```

3. Write the secret YAML holding the bootstrap admin credentials for Keycloak:

```
cat <<EOF > env/test/keycloak-bootstrapAdmin-user-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-bootstrapadmin-user
  namespace: keycloak
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: admin
EOF
```

4. Write the secret YAML with the PostgreSQL credentials Keycloak will use (a copy of the database credentials in Keycloak's own namespace):

```
cat <<EOF > env/test/cnpg-user-secret.yaml
apiVersion: v1
data:
  password: YmlncmVk
  username: YmlncmVk
kind: Secret
metadata:
  name: cnpg-bigred
  namespace: keycloak
EOF
```

5. Write the Keycloak YAML:

```
cat <<EOF > env/test/keycloak.yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
  namespace: keycloak
spec:
  instances: 3
  env:
  - name: KC_PROXY
    value: 'edge'
  bootstrapAdmin:
    user:
      secret: keycloak-bootstrapadmin-user
  db:
    vendor: postgres
    host: pooler-rw.cnpg.svc.cluster.local
    usernameSecret:
      name: cnpg-bigred
      key: username
    passwordSecret:
      name: cnpg-bigred
      key: password
  http:
    httpEnabled: true
  ingress:
    enabled: false
  hostname:
    hostname: keycloak.kubeantony.com
  proxy:
    headers: xforwarded
EOF
```

6. Write the Keycloak certificate YAML:

```
cat <<EOF > env/test/keycloak-cert-manager-cert.yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: cert-manager
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: root-ca
  namespace: cert-manager
spec:
  isCA: true
  duration: 87600h # 10 years
  renewBefore: 720h # 30d
  commonName: root-ca
  secretName: root-ca-secret
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: keycloak-ca-clusterissuer
spec:
  ca:
    secretName: root-ca-secret
EOF
```

7. Write the Keycloak ingress YAML:

```
cat <<EOF > env/test/keycloak-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: keycloak
  annotations:
    cert-manager.io/cluster-issuer: keycloak-ca-clusterissuer
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESS_ROUTE"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - keycloak.kubeantony.com
    secretName: keycloak-tls
  rules:
  - host: keycloak.kubeantony.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keycloak-service
            port:
              number: 8080
EOF
```

8. Write the Keycloak kustomization YAML:

```
cat <<EOF > env/test/kustomization.yaml
kind: Kustomization
resources:
- ./keycloak-ns.yaml
- ./cnpg-user-secret.yaml
- ./keycloak-bootstrapAdmin-user-secret.yaml
- ./keycloak-cert-manager-cert.yaml
- ./keycloak.yaml
- ./keycloak-ingress.yaml
EOF
```

9. Dry-run the Keycloak deployment:

```
kubectl apply -k env/test/ --dry-run=server
```

Output:

```
namespace/keycloak unchanged (server dry run)
secret/cnpg-bigred created (server dry run)
secret/keycloak-bootstrapadmin-user created (server dry run)
certificate.cert-manager.io/root-ca unchanged (server dry run)
clusterissuer.cert-manager.io/keycloak-ca-clusterissuer unchanged (server dry run)
issuer.cert-manager.io/selfsigned-issuer unchanged (server dry run)
keycloak.k8s.keycloak.org/keycloak created (server dry run)
ingress.networking.k8s.io/keycloak created (server dry run)
```

10. Deploy Keycloak for real:

```
kubectl apply -k env/test/
```

Output:

```
namespace/keycloak unchanged
secret/cnpg-bigred created
secret/keycloak-bootstrapadmin-user created
certificate.cert-manager.io/root-ca created
clusterissuer.cert-manager.io/keycloak-ca-clusterissuer created
issuer.cert-manager.io/selfsigned-issuer created
keycloak.k8s.keycloak.org/keycloak created
ingress.networking.k8s.io/keycloak created
```

11. Confirm the Keycloak rollout finishes:

```
kubectl -n keycloak rollout status statefulset keycloak
```

Output:

```
Waiting for 3 pods to be ready...
Waiting for 2 pods to be ready...
Waiting for 2 pods to be ready...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
partitioned roll out complete: 3 new pods have been updated...
```

12. Check the Keycloak pod status:

```
kubectl -n keycloak get pods
```

Output:

```
NAME                               READY   STATUS    RESTARTS   AGE
keycloak-0                         1/1     Running   0          2m38s
keycloak-1                         1/1     Running   0          117s
keycloak-2                         1/1     Running   0          81s
keycloak-operator-59c9ff54-h9wvv   1/1     Running   0          3h
```

13. View the Keycloak ingress:

```
kubectl -n keycloak get ing
```

Output:

```
NAME       CLASS   HOSTS                     ADDRESS                               PORTS     AGE
keycloak   nginx   keycloak.kubeantony.com   10.200.8.24,10.200.8.25,10.200.8.26   80, 443   4d3h
```

14. Open a browser and navigate to the Keycloak admin console. The URL is the ingress `HOSTS` value; in this article it is `keycloak.kubeantony.com`.

![image](https://hackmd.io/_uploads/rkIeqHCNWe.png)

15. Log in to the Keycloak admin console. Get the credentials with:

```
kubectl -n keycloak get secrets keycloak-bootstrapadmin-user -o go-template='User: {{.data.username | base64decode}}{{"\n"}}Pass: {{.data.password | base64decode}}{{"\n"}}'
```

Output:

```
User: admin
Pass: admin
```

![image](https://hackmd.io/_uploads/Skz33BC4bl.png)

After clicking `Sign In`, the console looks like this:

![image](https://hackmd.io/_uploads/BksJprAV-g.png)
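Since the ingress certificate chains to our own self-signed root CA, browsers will warn about it, but you can still verify that the right certificate is being served. A sketch using openssl; the secret name matches the Certificate created above, and the same CA extraction reappears later in the API-server section:

```
# Extract the CA that cert-manager used to sign the Keycloak certificate.
kubectl -n cert-manager get secret root-ca-secret \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.pem

# Fetch the certificate served by the ingress and verify it against that CA.
openssl s_client -connect keycloak.kubeantony.com:443 \
  -servername keycloak.kubeantony.com -CAfile ca.pem </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```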
## 7. Initialize the Keycloak Settings

1. Create a permanent admin account.

In the left menu: Users -> Add User

![image](https://hackmd.io/_uploads/Sk8qkLA4Wl.png)

Enter `administrator` in the Username field, then click `Create`.

![image](https://hackmd.io/_uploads/B1iIeU0Nbg.png)

2. Set a permanent password for the admin account.

Click `Credentials` -> `Set password`.

![image](https://hackmd.io/_uploads/BkXieU0Ebl.png)

Enter `password` in the Password field, enter `password` again in the Password confirmation field, and click `Save`.

![image](https://hackmd.io/_uploads/SJrwbU0VZx.png)

Click `Save password`.

![image](https://hackmd.io/_uploads/Hy-aW8AEZe.png)

3. Grant admin privileges to the `administrator` user.

Click `Role mapping` -> `Assign role` -> `Realm roles`.

![image](https://hackmd.io/_uploads/Bka8zUCEZe.png)

Check all roles, then click `Assign` at the bottom left.

![image](https://hackmd.io/_uploads/HykcELC4Wl.png)

---

# <font color=blue>Configuring Keycloak as the Authentication Service for the Kubernetes API Server</font>

## 1. Keycloak Background

Keycloak's main concepts:

* Realms: a realm manages a set of users, credentials, roles, groups, and so on. Resources in different realms are isolated from one another, which effectively provides multi-tenancy.
* Clients: applications and services that integrate with Keycloak for user authentication. In the walkthrough below, the kube-apiserver is the client.
* Users: entities that can log in to an application, with attributes such as email, username, address, phone number, and birthday.
* Groups: collections of users. You can assign a set of roles to a group; once a user belongs to that group, the user inherits all of the group's role permissions.
* Credentials: the material Keycloak uses to verify a user, such as passwords, one-time passwords, certificates, or fingerprints.
## 2. Create a Keycloak Realm

1. Click `Manage realms` in the left menu -> `Create realm`.

![image](https://hackmd.io/_uploads/H1bSis1SZl.png)

2. Enter `kubernetes-project` in the Realm name field, then click `Create` at the bottom left.

![image](https://hackmd.io/_uploads/S1V6jsJSWl.png)

## 3. Create Keycloak Users

1. Click `Users` in the left menu -> `Create new user`.

![image](https://hackmd.io/_uploads/SkolV2JrWe.png)

2. Enter `bigred` in the Username field, `bigred@example.com` in Email, `bigred` in First name, and `bigred` in Last name, then click `Create`.

![image](https://hackmd.io/_uploads/H1f6RxgBWg.png)

3. Click `Credentials` -> `Set password`.

![image](https://hackmd.io/_uploads/B1TML3Jrbx.png)

4. Enter `bigred` in Password and `bigred` in Password confirmation, set Temporary to `Off`, and click Save.

![image](https://hackmd.io/_uploads/BySc8nyS-e.png)

5. Click `Save password`.

![image](https://hackmd.io/_uploads/SkUWwhJHbe.png)

6. Click `Users` in the left menu -> `Create new user`.

7. Enter `rbean` in Username, `rbean@example.com` in Email, `rbean` in First name, and `rbean` in Last name, then click `Create`.

8. Click `Credentials` -> `Set password`.

9. Enter `rbean` in Password and `rbean` in Password confirmation, set Temporary to `Off`, and click Save.

10. Click `Save password`.
## 4. Create Keycloak Groups

1. Click `Groups` in the left menu -> `Create group`.

![image](https://hackmd.io/_uploads/HyFAvnyr-l.png)

2. Enter `admin` in the Name field and `Kubernetes cluster admin` in Description, then click `Create`.

![image](https://hackmd.io/_uploads/HJTcO2kHWg.png)

3. Click `admin`.

![image](https://hackmd.io/_uploads/BkhqK2kSbx.png)

4. Click `Members` -> `Add member`.

![image](https://hackmd.io/_uploads/SJV0KnyBWx.png)

5. Check the `bigred` user, then click `Add` at the bottom.

![image](https://hackmd.io/_uploads/rJ0Qq21Hbl.png)

6. Click `Groups` in the left menu -> `Create group`.

7. Enter `developers` in the Name field and `Kubernetes namespace admin` in Description, then click `Create`.

8. Click `developers`.

9. Click `Members` -> `Add member`.

10. Check the `rbean` user, then click `Add` at the bottom.
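All of the clicking in sections 2 through 4 can also be scripted with Keycloak's bundled admin CLI, `kcadm.sh`, run from inside one of the Keycloak pods. The rough sketch below uses this article's names and passwords and is illustrative rather than a drop-in replacement for the UI steps; the `/opt/keycloak/bin` path and the bootstrap `admin` login are assumptions about the official image.

```
# Open a shell in a Keycloak pod; kcadm.sh ships inside the image.
kubectl -n keycloak exec -it keycloak-0 -- bash

# ...then, inside the pod, authenticate the CLI against the local instance:
/opt/keycloak/bin/kcadm.sh config credentials \
  --server http://localhost:8080 --realm master \
  --user admin --password admin

# Create the realm, a user, its password, and a group.
/opt/keycloak/bin/kcadm.sh create realms -s realm=kubernetes-project -s enabled=true
/opt/keycloak/bin/kcadm.sh create users -r kubernetes-project \
  -s username=bigred -s email=bigred@example.com -s enabled=true
/opt/keycloak/bin/kcadm.sh set-password -r kubernetes-project \
  --username bigred --new-password bigred
/opt/keycloak/bin/kcadm.sh create groups -r kubernetes-project -s name=admin
# Adding a user to a group additionally requires the user and group IDs;
# see the kcadm documentation for the update users/<uid>/groups/<gid> form.
```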
## 5. Create the Keycloak Client

1. Create a Keycloak client in the `kubernetes-project` realm: click `Clients` in the left menu -> `Create client`.

![image](https://hackmd.io/_uploads/SJYYhskHbg.png)

2. Enter `kube-apiserver` in the Client ID field and `kube-apiserver` in Name, then click `Next`.

![image](https://hackmd.io/_uploads/BJ-M6sJSZg.png)

3. Check `Client authentication`, then click `Next`.

![image](https://hackmd.io/_uploads/BJidCs1S-g.png)

4. Enter `urn:ietf:wg:oauth:2.0:oob` in the Valid redirect URIs field, then click `Save`.

![image](https://hackmd.io/_uploads/BJgJy3kBWg.png)

> What is [urn:ietf:wg:oauth:2.0:oob](https://www.keycloak.org/docs/25.0.6/securing_apps/index.html#:~:text=be%20used%20instead.-,urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob,-If%20you%20cannot)?
> With this redirect URI, Keycloak renders a page where the authorization code appears both in the page title and in a text box on the page. The application can watch for the browser title change, or the user can copy the code manually and paste it into the application. This redirect URI also lets a user obtain the code on a different device and paste it back into the original application.

## 6. Add a Group Membership Mapper

In a Keycloak client's configuration, a mapper's core job boils down to: "decide which data goes into the token, and under what names."

After a user logs in successfully, Keycloak issues tokens (such as the ID Token or Access Token) to the application (the client). By default, a token carries only basic fields (username, email, sub).

The Group Membership mapper **puts the user's group memberships into the token. For the Kubernetes OIDC integration this is essential, because K8s typically binds RBAC permissions to this field.**

1. Click `Client scopes` -> `kube-apiserver-dedicated`.

![image](https://hackmd.io/_uploads/SyIpenyHWg.png)

2. Click `Configure a new mapper`.

![image](https://hackmd.io/_uploads/BJTeb21Bbl.png)

3. Click `Group Membership`.

![image](https://hackmd.io/_uploads/rkISBo1BWx.png)

4. Configure the mapper as follows:

- Name: `kube-groups`
- Token Claim Name: `kube-groups`
- Full group path: `Off`
- Add to ID token: keep `On`
- Add to access token: `Off`
- Add to lightweight access token: keep `Off`
- Add to userinfo: `Off`
- Add to token introspection: `Off`

![image](https://hackmd.io/_uploads/Bk9UzhyHbg.png)

## 7. Extend the Token Lifetime (optional)

In Keycloak, the `access_token` and `id_token` lifetimes default to `5` minutes. To make the following walkthrough easier, extend them to `30` minutes here.

1. Click `Realm settings` in the left menu -> the `Access tokens` tab on the right. Access Token Lifespan defaults to `5` minutes.

![image](https://hackmd.io/_uploads/rJ0CjnkSWl.png)

2. Change Access Token Lifespan to `30` minutes, then click `Save`.

![image](https://hackmd.io/_uploads/rycFnhJHbl.png)

## 8. Look Up the Keycloak Realm Endpoint Information

1. Click `Realm settings` in the left menu -> `General` -> the `OpenID Endpoint Configuration` link in the `Endpoints` field.

![image](https://hackmd.io/_uploads/H1snp2JHWg.png)

2. Enable pretty-printing and note the URL in the `issuer` field; it is needed shortly.

![image](https://hackmd.io/_uploads/B1I1k6yHWl.png)
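The same information is available without a browser. A sketch that pulls the realm's OIDC discovery document and extracts the fields relevant to the API server configuration in the next section; `-k` skips TLS verification because the chain is self-signed, or you can point `--cacert` at the `ca.pem` extracted in the next section instead:

```
# Fetch the realm's OIDC discovery document and keep the key fields.
curl -sk https://keycloak.kubeantony.com/realms/kubernetes-project/.well-known/openid-configuration \
  | jq '{issuer, token_endpoint, jwks_uri}'
```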
## 9. Persist the kube-apiserver Parameters (so they are not overwritten by the original values during K8s version upgrades)

1. Create and switch to the working directory:

```
mkdir -p ${HOME}/k8s/kubeadm/config/; cd ${HOME}/k8s/kubeadm/config/
```

2. Export the cluster's current `kubeadm-config`:

```
kubectl -n kube-system get cm kubeadm-config -o yaml > kubeadm-config-$(date '+%Y-%m-%d').yaml
```

3. Back up the cluster configuration file:

```
cp kubeadm-config-$(date '+%Y-%m-%d').yaml kubeadm-config-$(date '+%Y-%m-%d').yaml.bk
```

4. Edit the cluster configuration to enable OpenID Connect authentication on `kube-apiserver`:

```
nano kubeadm-config-$(date '+%Y-%m-%d').yaml
```

The part to modify:

```
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
      - name: oidc-issuer-url
        value: "https://keycloak.kubeantony.com/realms/kubernetes-project"
      - name: oidc-client-id
        value: "kube-apiserver"
      - name: oidc-username-claim
        value: "preferred_username"
      - name: oidc-groups-claim
        value: "kube-groups"
      - name: oidc-ca-file
        value: "/etc/kubernetes/pki/keycloak-ca.crt"
```

The API Server needs the following new startup flags:

- `--oidc-issuer-url`: the URL of the OpenID issuer; only HTTPS is accepted. If set, it is used to validate OIDC JSON Web Tokens (JWT).
  - This is the value obtained in Section 8, "Look Up the Keycloak Realm Endpoint Information".
- `--oidc-client-id`: the OpenID Connect client ID. Required whenever `oidc-issuer-url` is set.
  - This is the client ID configured in Section 5, "Create the Keycloak Client".
- `--oidc-username-claim`: the OpenID claim used as the username. Note that claims other than the default (`sub`) are not guaranteed to be unique or immutable.
  - To see where this value comes from:
    1. Click `Client scopes` in the left menu -> `profile`.
       ![image](https://hackmd.io/_uploads/S1GM_6Jrbg.png)
    2. Click `Mappers` -> `username`.
       ![image](https://hackmd.io/_uploads/S1Xi_pJH-x.png)
    3. Check the `Token Claim Name` field; in this article it is `preferred_username`.
       ![image](https://hackmd.io/_uploads/r1dkKpyBbe.png)
- `--oidc-groups-claim`: if provided, the name of a custom OpenID Connect claim specifying the user's groups. The claim value must be a string or an array of strings.
  - This is the Token Claim Name configured in Section 6, "Add a Group Membership Mapper".
- `--oidc-ca-file`: if set, the OpenID server's certificate is verified against the certificate authority in `oidc-ca-file`; otherwise, the host's root CA set is used.

5. Extract the root CA certificate that cert-manager used to sign Keycloak's certificate:

```
kubectl -n cert-manager get secret root-ca-secret \
-o jsonpath='{.data.ca\.crt}' | base64 -d \
> ${HOME}/k8s/kubeadm/addon/keycloak/env/test/ca.pem
```

6. Distribute the Keycloak root CA certificate to **every** control-plane node in the cluster:

```
scp ${HOME}/k8s/kubeadm/addon/keycloak/env/test/ca.pem <user>@<control-plane node 1 ip>:./ca.pem
scp ${HOME}/k8s/kubeadm/addon/keycloak/env/test/ca.pem <user>@<control-plane node 2 ip>:./ca.pem
scp ${HOME}/k8s/kubeadm/addon/keycloak/env/test/ca.pem <user>@<control-plane node 3 ip>:./ca.pem
```

7. On **every** control-plane node, copy the Keycloak root CA certificate to the directory specified in the cluster configuration above:

```
ssh <user>@<control-plane node 1 ip> sudo cp ${HOME}/ca.pem /etc/kubernetes/pki/keycloak-ca.crt
ssh <user>@<control-plane node 2 ip> sudo cp ${HOME}/ca.pem /etc/kubernetes/pki/keycloak-ca.crt
ssh <user>@<control-plane node 3 ip> sudo cp ${HOME}/ca.pem /etc/kubernetes/pki/keycloak-ca.crt
```

8. Back on the K8s management host, dry-run applying the Kubernetes cluster configuration to check that the syntax is valid and the cluster accepts it:

```
kubectl apply -f kubeadm-config-$(date '+%Y-%m-%d').yaml --dry-run=server
```

Output:

```
Warning: resource configmaps/kubeadm-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/kubeadm-config configured (server dry run)
```

9. If everything looks right, apply the cluster configuration for real:

```
kubectl apply -f kubeadm-config-$(date '+%Y-%m-%d').yaml
```

Output:

```
Warning: resource configmaps/kubeadm-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/kubeadm-config configured
```

10. Extract the cluster's live configuration and save it locally as `config.yaml`:

```
kubectl -n kube-system get cm kubeadm-config \
-o jsonpath={.data.ClusterConfiguration} > /tmp/config.yaml
```

11. Ship `config.yaml` to the `/tmp` directory of **every** control-plane node:

```
scp /tmp/config.yaml <user>@<control-plane node 1 ip>:/tmp/
scp /tmp/config.yaml <user>@<control-plane node 2 ip>:/tmp/
scp /tmp/config.yaml <user>@<control-plane node 3 ip>:/tmp/
```

12. Regenerate the API Server startup parameters on **every** control-plane node:

```
ssh <user>@<control-plane node 1 ip> sudo kubeadm init phase control-plane apiserver --config /tmp/config.yaml
ssh <user>@<control-plane node 2 ip> sudo kubeadm init phase control-plane apiserver --config /tmp/config.yaml
ssh <user>@<control-plane node 3 ip> sudo kubeadm init phase control-plane apiserver --config /tmp/config.yaml
```

On each control plane you will see:

```
[control-plane] Creating static Pod manifest for "kube-apiserver"
```

13. Back on the K8s management host, confirm the kube-apiserver pods restarted and are healthy:

```
kubectl -n kube-system get pods -l "component=kube-apiserver"
```

Output:

```
NAME                        READY   STATUS    RESTARTS   AGE
kube-apiserver-kubeadm-c1   1/1     Running   0          104s
kube-apiserver-kubeadm-c2   1/1     Running   0          58s
kube-apiserver-kubeadm-c3   1/1     Running   0          50s
```
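To double-check that the OIDC flags actually landed, you can grep the regenerated static pod manifest on each control-plane node. A minimal sketch (substitute your own user and node address, as elsewhere in this article):

```
# kubeadm rewrites this static pod manifest in the phase above; the
# OIDC flags should now appear on the kube-apiserver command line.
ssh <user>@<control-plane node 1 ip> \
  "sudo grep -- '--oidc' /etc/kubernetes/manifests/kube-apiserver.yaml"
```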
## 10. Configure K8s RBAC

1. Create a dedicated K8s namespace for the `developers` group:

```
kubectl create ns project-1
```

2. Make the `developers` group an admin of that namespace:

```
kubectl create rolebinding developers-group-ns-admin \
--clusterrole=admin \
--group=developers \
--namespace=project-1
```

3. Make the `admin` group a K8s cluster administrator:

```
kubectl create clusterrolebinding admin-group-cluster-admin \
--clusterrole=cluster-admin \
--group=admin
```
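Before doing a full OIDC login, these bindings can be smoke-tested with kubectl's impersonation flags. A sketch assuming the group names above; the username is arbitrary here because only the group memberships matter for these bindings:

```
# A member of the developers group should be admin inside project-1...
kubectl auth can-i create deployments -n project-1 \
  --as=any-user --as-group=developers --as-group=system:authenticated
# => yes

# ...but have no special rights in other namespaces.
kubectl auth can-i create deployments -n default \
  --as=any-user --as-group=developers --as-group=system:authenticated
# => no

# A member of the admin group should pass the cluster-admin check.
kubectl auth can-i '*' '*' -A --as=any-user --as-group=admin
# => yes
```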
## 11. Authenticate with `kubelogin` - Browser Mode

When you run a `kubectl` command, `kubelogin` prints a hyperlink. Opening that link in a browser takes the user to a login flow where they enter their username and password; once authentication succeeds, kubelogin obtains a token from the authentication server, and `kubectl` can then use that token to talk to the K8s API Server. The flow looks like this:

![image](https://hackmd.io/_uploads/HJj6DyCEWe.png)

1. Install the `krew` kubectl plugin manager:

```
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)

echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```

2. Install the `kubelogin` kubectl plugin:

```
kubectl krew install oidc-login
```

Output:

```
Updated the local copy of plugin index.
Installing plugin: oidc-login
Installed plugin: oidc-login
\
 | Use this plugin:
 |      kubectl oidc-login
 | Documentation:
 |      https://github.com/int128/kubelogin
 | Caveats:
 | \
 |  | You need to setup the OIDC provider, Kubernetes API server, role binding and kubeconfig.
 | /
/
WARNING: You installed plugin "oidc-login" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
```

3. Create an `oidc-web` kubectl credential that authenticates interactively against Keycloak over OIDC to obtain a token for accessing Kubernetes:

```
kubectl config set-credentials oidc-web \
  --exec-api-version=client.authentication.k8s.io/v1 \
  --exec-interactive-mode=IfAvailable \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg="--oidc-issuer-url=https://keycloak.kubeantony.com/realms/kubernetes-project" \
  --exec-arg="--oidc-client-id=kube-apiserver" \
  --exec-arg="--oidc-client-secret=EM06cnVYdLFjRa46DDgoBRl4NL7zhMxJ" \
  --exec-arg="--oidc-redirect-url=urn:ietf:wg:oauth:2.0:oob" \
  --exec-arg="--certificate-authority=/etc/kubernetes/pki/keycloak-ca.crt" \
  --exec-arg="--grant-type=authcode-keyboard"
```

* `kubectl config set-credentials oidc-web`:
  * Creates or updates a user entry named `oidc-web` in the kubeconfig file.
* `--exec-api-version` & `--exec-command`:
  * Use the Kubernetes `client.authentication.k8s.io/v1` exec API and invoke `kubectl` as the executable (which in turn runs the plugin).
* `--exec-arg=oidc-login`, `get-token`:
  * The plugin command actually executed is `oidc-login` (kubelogin), and the action is to fetch a token.
* `--exec-arg="--oidc-issuer-url/client-id/secret"`:
  * The endpoint, client ID, and secret needed to reach the OIDC Provider (Keycloak here).
* `--exec-arg="--grant-type=authcode-keyboard"`:
  * Selects the "type in the authorization code" flow. After logging in via the browser, the user must **manually copy the authorization code** and paste it back into the terminal, instead of having it returned automatically through a local port.
* `--exec-arg="--oidc-redirect-url=urn:ietf:wg:oauth:2.0:oob"`:
  * Paired with the mode above, tells Keycloak to show an Out-of-Band (OOB) page after authentication (typically displaying a code) rather than redirecting to a specific URL.
* To obtain the `oidc-client-secret` value:
  1. Click `Clients` in the left menu -> `kube-apiserver`.
     ![image](https://hackmd.io/_uploads/Hkld6CyHWg.png)
  2. Click `Credentials` -> the copy button next to the Client Secret field.
     ![image](https://hackmd.io/_uploads/SJvTpAyr-x.png)

Output:

```
User "oidc-web" set.
```

4. Verify access to K8s:

```
kubectl --user=oidc-web auth whoami
```

Output:

```
Please visit the following URL in your browser: https://keycloak.kubeantony.com/realms/kubernetes-project/protocol/openid-connect/auth?access_type=offline&client_id=kube-apiserver&code_challenge=P4RQKK8ISp0lNuDjEdFh21JzyP-mXZvmw9NxPhgie8Y&code_challenge_method=S256&nonce=YXM_s1sD3u6mmsLQpe_Zg1Nsem-0OhBodCbORiJO5T4&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=openid&state=UYOI-OojjJRVk1rWr4m_BxVnHuu46ratNYmf2FZMWAA
Enter code:
```

Remember to sign out of the `administrator` account in Keycloak first.

Open the verification link printed by the command and enter:

- Username or email: `bigred`
- Password: `bigred`

Click `Sign In`.

![image](https://hackmd.io/_uploads/ry_ffJeS-l.png)

You then get the verification code; click the copy button on its right.

![image](https://hackmd.io/_uploads/BJ511ger-l.png)

5. Paste the verification code into the terminal:

```
Enter code: 229cea71-4d33-8bc1-4662-704dff97a9ac.172198d7-1020-1881-90f3-e75d092fb006.31751bd5-670a-4446-a183-6ea3386ea8e2
```

Output:

```
ATTRIBUTE                                           VALUE
Username                                            https://keycloak.kubeantony.com/realms/kubernetes-project#bigred
Groups                                              [admin system:authenticated]
Extra: authentication.kubernetes.io/credential-id   [JTI=89287e03-0da6-d421-ea72-7adfd388e894]
```

> The username is prefixed with the issuer URL and `#` because `--oidc-username-claim` is set to a claim other than `email` and no `--oidc-username-prefix` is configured; Kubernetes does this to avoid clashes with users from other authentication methods.

6. Confirm that the `bigred` user has full super-administrator (cluster-admin) privileges over the entire Kubernetes cluster:

```
kubectl --user=oidc-web auth can-i '*' '*' -A
```

Output:

```
yes
```
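What `kubectl` receives from the plugin is a standard `ExecCredential` object. Running the same get-token command by hand shows exactly what will be cached and sent to the API server; a sketch reusing this article's parameters (`jq` is assumed for pretty-printing):

```
# Run the exec plugin directly and pretty-print the ExecCredential JSON;
# the .status.token field holds the id_token, whose payload can be decoded
# with the same trick shown back in the preface to confirm the kube-groups
# claim is present.
kubectl oidc-login get-token \
  --oidc-issuer-url=https://keycloak.kubeantony.com/realms/kubernetes-project \
  --oidc-client-id=kube-apiserver \
  --oidc-client-secret=EM06cnVYdLFjRa46DDgoBRl4NL7zhMxJ \
  --oidc-redirect-url=urn:ietf:wg:oauth:2.0:oob \
  --certificate-authority=/etc/kubernetes/pki/keycloak-ca.crt \
  --grant-type=authcode-keyboard | jq .
```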
## 12. Authenticate with `kubelogin` - Command-Line Mode

1. Allow Direct access grants on the Keycloak client.

Click `Clients` in the left menu -> `kube-apiserver`.

![image](https://hackmd.io/_uploads/H1-aNlxBWg.png)

Click `Capability config` on the right, check `Direct access grants`, then click `Save`.

![image](https://hackmd.io/_uploads/BkVXregS-e.png)

2. Clear the token cache:

```
kubectl oidc-login clean
```

3. Configure command-line login:

```
kubectl config set-credentials oidc-cli \
  --exec-api-version=client.authentication.k8s.io/v1 \
  --exec-interactive-mode=IfAvailable \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg="--oidc-issuer-url=https://keycloak.kubeantony.com/realms/kubernetes-project" \
  --exec-arg="--oidc-client-id=kube-apiserver" \
  --exec-arg="--oidc-client-secret=EM06cnVYdLFjRa46DDgoBRl4NL7zhMxJ" \
  --exec-arg="--certificate-authority=/etc/kubernetes/pki/keycloak-ca.crt" \
  --exec-arg="--grant-type=password"
```

> `--exec-arg="--grant-type=password"`: selects the ROPC (Resource Owner Password Credentials) flow. When `kubectl` runs, it prompts for the Username and Password directly in the terminal and sends those credentials straight to Keycloak for verification.

Output:

```
User "oidc-cli" set.
```

4. Verify access to K8s:

```
kubectl --user=oidc-cli auth whoami
```

Output:

```
Username: rbean
Password: rbean

ATTRIBUTE                                           VALUE
Username                                            https://keycloak.kubeantony.com/realms/kubernetes-project#rbean
Groups                                              [developers system:authenticated]
Extra: authentication.kubernetes.io/credential-id   [JTI=43a41723-b6d7-95fe-543a-d8bf5069e6cb]
```

5. Verify that the `rbean` user cannot create a deployment in other K8s namespaces:

```
kubectl --user=oidc-cli auth can-i create deployment -n default
```

Output:

```
no
```

6. Verify that the `rbean` user can create a deployment in the `project-1` namespace:

```
kubectl --user=oidc-cli auth can-i create deployment -n project-1
```

Output:

```
yes
```

7. Check what the `rbean` user is allowed to do in other K8s namespaces:

```
kubectl --user=oidc-cli auth can-i --list -n default
```

Output:

```
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectreviews.authentication.k8s.io        []                  []               [create]
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
                                                [/api/*]            []               [get]
                                                [/api]              []               [get]
                                                [/apis/*]           []               [get]
                                                [/apis]             []               [get]
                                                [/healthz]          []               [get]
                                                [/healthz]          []               [get]
                                                [/livez]            []               [get]
                                                [/livez]            []               [get]
                                                [/openapi/*]        []               [get]
                                                [/openapi]          []               [get]
                                                [/readyz]           []               [get]
                                                [/readyz]           []               [get]
                                                [/version/]         []               [get]
                                                [/version/]         []               [get]
                                                [/version]          []               [get]
                                                [/version]          []               [get]
```

8. Confirm that the `rbean` user cannot read K8s cluster node information:

```
kubectl --user=oidc-cli auth can-i get nodes
```

Output:

```
Warning: resource 'nodes' is not namespace scoped
no
```

9. Split the oidc-cli user's kubeconfig out into its own file:

```
kubectl config set-context oidc-context \
--cluster=topgun \
--user=oidc-cli \
--namespace=project-1

kubectl config view \
--context=oidc-context \
--minify \
--flatten \
> oidc-cli.kubeconfig
```

10. Inspect the oidc-cli user's kubeconfig:

```
kubectl --kubeconfig oidc-cli.kubeconfig config view
```

Output:

```
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.11.176:6443
  name: topgun
contexts:
- context:
    cluster: topgun
    namespace: project-1
    user: oidc-cli
  name: oidc-context
current-context: oidc-context
kind: Config
users:
- name: oidc-cli
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://keycloak.kubeantony.com/realms/kubernetes-project
      - --oidc-client-id=kube-apiserver
      - --oidc-client-secret=EM06cnVYdLFjRa46DDgoBRl4NL7zhMxJ
      - --certificate-authority=/etc/kubernetes/pki/keycloak-ca.crt
      - --grant-type=password
      command: kubectl
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false
```

> Notice that the user entry has no `client-certificate-data` or `client-key-data`; instead, a token is fetched dynamically from Keycloak.

# References

- [How to Implement OIDC Auth in Kubernetes for Secure Cluster Access](https://www.voltagepark.com/blog/how-to-use-oidc-for-secure-streamlined-kubernetes-access)
- [Authenticating Users with Keycloak OIDC Provider in Kubernetes](https://cloud.tencent.com/developer/article/1983889)
- [Authenticate to Kubernetes with Keycloak OIDC on K3s](https://geek-cookbook.funkypenguin.co.nz/kubernetes/oidc-authentication/keycloak/)
- [kubelogin - GitHub](https://github.com/int128/kubelogin/tree/master)
- [Krew Installing - Krew Docs](https://krew.sigs.k8s.io/docs/user-guide/setup/install/)