# Deploy SUSE AI on RKE2 with Rancher

## Prerequisites

Rancher
* CPU: 4 cores
* MEM: 16 GB
* Disk: 70 GB NVMe SSD
* OS: SLES 15 SP6

RKE2
* CPU: 4 cores
* MEM: 16 GB
* Disk: 70 GB NVMe SSD
* GPU: NVIDIA GeForce RTX 3070
* OS: SLES 15 SP6
* The GPU must be attached to the VM beforehand

![image](https://hackmd.io/_uploads/SyYeUAXIJe.png)

## Software

* Rancher is already installed
* An RKE2 cluster has been provisioned through Rancher

## 1. Install NVIDIA Container Runtime on RKE2

```
# 1. SSH to the RKE2 node
$ ssh <user>@<RKE2 node ip>

# 2. Install gcc, kernel-devel and nvidia-container-toolkit
$ sudo zypper mr -ea && \
  sudo zypper -n in gcc kernel-devel nvidia-container-toolkit

# Check that all related kernel package versions match
$ rpm -qa | grep -E "kernel-default-devel|kernel-default|kernel-devel|kernel-macros"

# 3. Install the NVIDIA driver
$ v="550.142" && \
  curl -L -O https://us.download.nvidia.com/XFree86/Linux-x86_64/"$v"/NVIDIA-Linux-x86_64-"$v".run && \
  sudo sh NVIDIA-Linux-x86_64-"$v".run

# 4. Reboot the host
$ sudo reboot
```

```
# 5. Test
$ nvidia-smi
Thu Jan  2 17:01:56 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.142                Driver Version: 550.142        CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3070        Off |   00000000:00:10.0 Off |                  N/A |
|  0%   36C    P8             16W /  270W |       1MiB /  8192MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
```

```
# 6. Update the RKE2 containerd configuration
## 6.1. Check
$ ls -l /usr/bin/nvidia-container-runtime
-rwxr-xr-x 1 root root 4319136 Oct 20  2022 /usr/bin/nvidia-container-runtime

## 6.2. Back up the current config
$ sudo cp /var/lib/rancher/rke2/agent/etc/containerd/config.toml .

## 6.3. Update
$ sudo cp /var/lib/rancher/rke2/agent/etc/containerd/config.toml /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
$ sudo nano -Yone /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl

## Append the following to the end of the file
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes."nvidia"]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes."nvidia".options]
  BinaryName = "/usr/bin/nvidia-container-runtime"

## 6.4. Reboot the host
$ sudo reboot

## 6.5. Delete the .tmpl file
$ sudo rm /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl
```

## 2. Install gpu-operator on Rancher

### 2.1. Add the NVIDIA Helm Repository

* Click Cluster -> Apps -> Repositories -> Create

```
name: nvidia
Index URL: https://helm.ngc.nvidia.com/nvidia
```

![image](https://hackmd.io/_uploads/rJjkKRvPC.png)
![image](https://hackmd.io/_uploads/HyygFAwvA.png)

* After adding the repository, confirm its status is Active

![image](https://hackmd.io/_uploads/Sy9ZFCww0.png)
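* The same repository can also be registered from the CLI instead of the Rancher UI. A minimal sketch, assuming Rancher's `catalog.cattle.io/v1` `ClusterRepo` CRD is available on the cluster (this is the object the Create button above produces):

```
# clusterrepo-nvidia.yaml -- apply with: kubectl apply -f clusterrepo-nvidia.yaml
# Registers the NVIDIA Helm index with Rancher, same as the UI steps in 2.1.
apiVersion: catalog.cattle.io/v1
kind: ClusterRepo
metadata:
  name: nvidia
spec:
  url: https://helm.ngc.nvidia.com/nvidia
```

After applying it, the repository should appear under Apps -> Repositories with the same Active status shown in the screenshot.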
### 2.2. Install gpu-operator

* Apps -> Charts -> search for gpu-operator -> click gpu-operator

![image](https://hackmd.io/_uploads/r1OmKCPvC.png)

* Click the Install button in the top-right corner

![image](https://hackmd.io/_uploads/BkGNKCDvC.png)

> Namespace: Create a new namespace -> enter `nvidia`
> Name: enter `nvidia`
> Check `Customize Helm options before install`
> Next

![image](https://hackmd.io/_uploads/B1C8YAPPA.png)

* Update the containerd settings in the chart values

```
toolkit:
  enabled: true
  env:
    - name: CONTAINERD_CONFIG
      value: /var/lib/rancher/rke2/agent/etc/containerd/config.toml
    - name: CONTAINERD_SOCKET
      value: /run/k3s/containerd/containerd.sock
```

![image](https://hackmd.io/_uploads/rJedY0vD0.png)

> Click the `Install` button in the bottom-right corner to install.
>
> Check that all Pods are Running:
>
> Workloads -> Pods -> nvidia namespace

* Verify

```
$ kubectl -n nvidia get po
NAME                                                    READY   STATUS      RESTARTS   AGE
gpu-feature-discovery-gb455                             1/1     Running     1          62s
gpu-operator-5cb66cf8df-ck74b                           1/1     Running     0          79s
nvidia-container-toolkit-daemonset-6snlm                1/1     Running     0          63s
nvidia-cuda-validator-z9bwc                             0/1     Completed   0          47s
nvidia-dcgm-exporter-4klw5                              1/1     Running     0          63s
nvidia-device-plugin-daemonset-6d4wn                    1/1     Running     1          63s
nvidia-node-feature-discovery-gc-59fb9585fb-vn9mc       1/1     Running     0          79s
nvidia-node-feature-discovery-master-579469ff77-kjfwc   1/1     Running     0          79s
nvidia-node-feature-discovery-worker-86hp8              1/1     Running     0          79s
nvidia-operator-validator-h77t5                         1/1     Running     0          63s
```

* Check the nvidia runtimeclass

```
$ kubectl get runtimeclass
NAME     HANDLER   AGE
nvidia   nvidia    3m37s
```
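* Optionally, before running the samples in 2.3, confirm that the device plugin actually advertises the GPU to the scheduler. A simple check (the exact resource count shown depends on your node; on this setup the GPU node should report `nvidia.com/gpu: 1` under Capacity and Allocatable):

```
$ kubectl describe nodes | grep -i "nvidia.com/gpu"
```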
### 2.3. Test GPU Pods

```
# 1. Sample 1
$ cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd
spec:
  restartPolicy: OnFailure
  runtimeClassName: nvidia
  containers:
  - name: cuda-vectoradd
    image: "nvidia/samples:vectoradd-cuda11.2.1"
    resources:
      limits:
        nvidia.com/gpu: 1
EOF

# 2. Log output on success
$ kubectl logs cuda-vectoradd
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done

# 3. Delete the Sample 1 pod
$ kubectl delete pod cuda-vectoradd

# 4. Sample 2
$ cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nbody-gpu-benchmark
  namespace: default
spec:
  restartPolicy: OnFailure
  runtimeClassName: nvidia
  containers:
  - name: cuda-container
    image: nvcr.io/nvidia/k8s/cuda-sample:nbody
    args: ["nbody", "-gpu", "-benchmark"]
    resources:
      limits:
        nvidia.com/gpu: 1
    env:
    - name: NVIDIA_VISIBLE_DEVICES
      value: all
    - name: NVIDIA_DRIVER_CAPABILITIES
      value: all
EOF

# 5. Log output on success
$ kubectl logs nbody-gpu-benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
	-fullscreen       (run n-body simulation in fullscreen mode)
	-fp64             (use double precision floating point values for simulation)
	-hostmem          (stores simulation data in host memory)
	-benchmark        (run benchmark to measure performance)
	-numbodies=<N>    (number of bodies (>= 1) to run in simulation)
	-device=<d>       (where d=0,1,2.... for the CUDA device to use)
	-numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
	-compare          (compares simulation results running once on the default GPU and once on the CPU)
	-cpu              (run n-body simulation on the CPU)
	-tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
MapSMtoCores for SM 8.9 is undefined.  Default to use 128 Cores/SM
MapSMtoArchName for SM 8.9 is undefined.  Default to use Ampere
GPU Device 0: "Ampere" with compute capability 8.9

> Compute 8.9 CUDA device: [NVIDIA GeForce RTX 4060 Ti]
34816 bodies, total time for 10 iterations: 19.339 ms
= 626.784 billion interactions per second
= 12535.677 single-precision GFLOP/s at 20 flops per interaction

# 6. Delete the Sample 2 pod
$ kubectl delete pod nbody-gpu-benchmark
```

## 3. Install Local Path Provisioner (set as default), Helm, and cert-manager

* Install Local Path Provisioner

```
$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.30/deploy/local-path-storage.yaml
```

```
$ kubectl get storageclass
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  108m
```

* Set it as the default StorageClass

```
$ kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

* Install Helm

```
$ wget https://get.helm.sh/helm-v3.16.4-linux-amd64.tar.gz
$ tar zxvf helm-v3.16.4-linux-amd64.tar.gz
$ cp linux-amd64/helm /usr/local/bin/
$ helm version
version.BuildInfo{Version:"v3.16.4", GitCommit:"7877b45b63f95635153b29a42c0c2f4273ec45ca", GitTreeState:"clean", GoVersion:"go1.22.7"}
```

* Install cert-manager

```
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.crds.yaml
$ helm repo add jetstack https://charts.jetstack.io
$ helm install cert-manager jetstack/cert-manager \
    --namespace cert-manager \
    --create-namespace \
    --version v1.11.0

$ kubectl get crds | grep cert-manager
certificaterequests.cert-manager.io   2025-01-02T09:42:54Z
certificates.cert-manager.io          2025-01-02T09:42:55Z
challenges.acme.cert-manager.io       2025-01-02T09:42:54Z
clusterissuers.cert-manager.io        2025-01-02T09:42:54Z
issuers.cert-manager.io               2025-01-02T09:42:54Z
orders.acme.cert-manager.io           2025-01-02T09:42:55Z
```
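* Before moving on to the application deployments, you can optionally confirm that the default StorageClass is picked up by dynamic provisioning. A throwaway PVC sketch (the PVC name is arbitrary); note that because local-path uses `WaitForFirstConsumer`, the claim stays Pending until a pod actually mounts it, which is expected:

```
# No storageClassName is set, so the default class (local-path) should be assigned
$ cat << EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-smoke-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# STORAGECLASS should show local-path; delete the claim afterwards
$ kubectl get pvc local-path-smoke-test
$ kubectl delete pvc local-path-smoke-test
```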
## 4. Install Milvus, Ollama & Open WebUI with Helm

### 4.1. Install Milvus

* Milvus is a scalable, high-performance vector database designed for AI applications. It is particularly well suited to workloads that handle unstructured data (such as image, video, audio, and text embeddings) and provides high-performance similarity search.

> You must first log in to the Application Collection and create a token:
> https://apps.rancher.io/applications/milvus
>
> See the following guide for creating a token: https://docs.apps.rancher.io/get-started/authentication/

```
$ kubectl create namespace suse-ai

$ kubectl create secret docker-registry application-collection \
    --docker-server=dp.apps.rancher.io \
    --docker-username=APPCO_USERNAME \
    --docker-password=APPCO_USER_TOKEN \
    -n suse-ai

$ helm registry login dp.apps.rancher.io/charts \
    -u APPCO_USERNAME \
    -p APPCO_USER_TOKEN
```

* View all parameters that can be set in values.yaml

```
$ helm show values oci://dp.apps.rancher.io/charts/milvus
global:
  # -- Global override for container image registry
  imageRegistry: ""
  # -- Global override for container image registry pull secrets
  imagePullSecrets: []

## Expand the name of the chart
nameOverride: ""
## Default fully qualified app name
fullnameOverride: ""

## Enable or disable Milvus Cluster mode
cluster:
  enabled: true

image:
  all:
    registry: dp.apps.rancher.io
    repository: containers/milvus
    tag: 2.4.6
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
......
```

* Create the following override file

```
$ vim milvus_custom_overrides.yaml

global:
  imagePullSecrets:
  - application-collection
cluster:
  enabled: true
standalone:
  persistence:
    persistentVolumeClaim:
      storageClass: local-path
etcd:
  replicaCount: 1
  persistence:
    storageClassName: local-path
minio:
  mode: distributed
  replicas: 4
  rootUser: "admin"
  rootPassword: "adminminio"
  persistence:
    storageClass: local-path
  resources:
    requests:
      memory: 1024Mi
kafka:
  enabled: true
  name: kafka
  replicaCount: 3
  broker:
    enabled: true
  cluster:
    listeners:
      client:
        protocol: 'PLAINTEXT'
      controller:
        protocol: 'PLAINTEXT'
  persistence:
    enabled: true
    annotations: {}
    labels: {}
    existingClaim: ""
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 8Gi
    storageClassName: "local-path"
```

* Deploy

```
$ helm upgrade --install milvus oci://dp.apps.rancher.io/charts/milvus \
    -n suse-ai \
    --version 4.2.2 \
    -f milvus_custom_overrides.yaml
```

* Deployment complete

```
$ kubectl -n suse-ai get po
NAME                                 READY   STATUS    RESTARTS   AGE
milvus-datacoord-6d579c6b9f-php4m    1/1     Running   0          4m1s
milvus-datanode-787b5cb588-wp2bg     1/1     Running   0          4m1s
milvus-etcd-0                        1/1     Running   0          4m1s
milvus-indexcoord-68874d968b-ws2qq   1/1     Running   0          4m1s
milvus-indexnode-7c58df95df-r575m    1/1     Running   0          4m1s
milvus-kafka-broker-0                1/1     Running   0          4m1s
milvus-kafka-broker-1                1/1     Running   0          4m1s
milvus-kafka-broker-2                1/1     Running   0          4m1s
milvus-kafka-controller-0            1/1     Running   0          4m1s
milvus-kafka-controller-1            1/1     Running   0          4m1s
milvus-kafka-controller-2            1/1     Running   0          4m1s
milvus-minio-0                       1/1     Running   0          4m1s
milvus-minio-1                       1/1     Running   0          4m1s
milvus-minio-2                       1/1     Running   0          4m1s
milvus-minio-3                       1/1     Running   0          4m1s
milvus-proxy-66698989c8-7b8jf        1/1     Running   0          4m1s
milvus-querycoord-5579c6447c-xc6g6   1/1     Running   0          4m1s
milvus-querynode-fb545998c-fp88t     1/1     Running   0          4m1s
milvus-rootcoord-6f8b669f-vt25l      1/1     Running   0          4m1s
```
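* Before wiring Open WebUI to Milvus in 4.2, you can optionally check that the Milvus proxy responds. A minimal sketch, assuming the chart's `milvus` service in the `suse-ai` namespace (the same service referenced later as `http://milvus.suse-ai.svc.cluster.local:19530`) also exposes Milvus's standard metrics/health port 9091:

```
# Forward the assumed health/metrics port of the Milvus proxy service locally
$ kubectl -n suse-ai port-forward svc/milvus 9091:9091 &

# The liveness endpoint should answer OK when the deployment is healthy
$ curl http://localhost:9091/healthz
```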
### 4.2. Install Ollama & Open WebUI

* Reference: https://apps.rancher.io/applications/open-webui

* Create the following override file

```
$ vim owui_custom_overrides.yaml

global:
  imagePullSecrets:
  - application-collection
image:
  registry: dp.apps.rancher.io
  repository: containers/open-webui
  tag: 0.3.32
  pullPolicy: IfNotPresent
ollamaUrls:
- http://open-webui-ollama.suse-ai.svc.cluster.local:11434
persistence:
  enabled: true
  storageClass: local-path
ollama:
  enabled: true
  image:
    registry: dp.apps.rancher.io
    repository: containers/ollama
    tag: 0.3.6
    pullPolicy: IfNotPresent
  ingress:
    enabled: false
  defaultModel: "gemma:2b"
  ollama:
    models:
    - "gemma:2b"
    - "llama3.1"
    gpu:
      enabled: true
      type: 'nvidia'
      number: 1
  persistentVolume:
    enabled: true
    storageClass: local-path
pipelines:
  enabled: false
  persistence:
    storageClass: local-path
ingress:
  enabled: true
  class: ""
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  host: ollama.example.com
  tls: true
extraEnvVars:
- name: DEFAULT_MODELS
  value: "gemma:2b"
- name: DEFAULT_USER_ROLE
  value: "user"
- name: WEBUI_NAME
  value: "SUSE AI"
- name: GLOBAL_LOG_LEVEL
  value: INFO
- name: RAG_EMBEDDING_MODEL
  value: "sentence-transformers/all-MiniLM-L6-v2"
- name: VECTOR_DB
  value: "milvus"
- name: MILVUS_URI
  value: http://milvus.suse-ai.svc.cluster.local:19530
```

* Deploy

```
$ helm upgrade --install open-webui oci://dp.apps.rancher.io/charts/open-webui \
    -n suse-ai \
    --version 3.3.2 \
    -f owui_custom_overrides.yaml
```

* Check the Ingress

```
$ kubectl -n suse-ai get ing
NAME         CLASS   HOSTS                ADDRESS          PORTS     AGE
open-webui   nginx   ollama.example.com   192.168.11.100   80, 443   12h
```

* Create an account and log in to SUSE AI

> https://ollama.example.com

* Models can be switched from the selector in the top-left corner

![image](https://hackmd.io/_uploads/B1ZFnmrIyl.png)
![image](https://hackmd.io/_uploads/rkxsn4rI1l.png)
![image](https://hackmd.io/_uploads/ByVBCVS8yg.png)

## Cleanup

```
$ helm -n suse-ai uninstall open-webui
$ helm -n suse-ai uninstall milvus
$ for i in $(kubectl -n suse-ai get pvc -o name); do kubectl -n suse-ai delete ${i}; done
$ kubectl delete ns suse-ai
```

## References

https://documentation.suse.com/suse-ai/1.0/html/AI-deployment-intro/index.html