Intro
===
###### tags: `Kubernetes`, `k8s`, `app`, `slurm`, `SlinkyProject`, `platform`, `Rafay`
<br>
[TOC]
<br>
## SlinkyProject
> https://github.com/SlinkyProject
- ### GitHub
- https://github.com/SlinkyProject/slurm-operator
- https://github.com/SlinkyProject/slurm-bridge
- https://github.com/SlinkyProject/slurm-client
- ...
- ### Installation
> memo: what is KEDA? (Kubernetes Event-driven Autoscaling)
- [Slinky - QuickStart Guide](https://github.com/SlinkyProject/slurm-operator/blob/release-0.3/docs/quickstart.md)
    - Install the Slurm Operator first
    - Then install the Slurm Cluster (install commands are sketched after the `helm template` example below)
        - Adjust `storageClass` to match the storage infrastructure of the current K8s environment
        - Adjust `rootSshAuthorizedKeys` so that the current control-plane node can log in
- How can the rendered templates be inspected?
> Swap `helm install` for `helm template`:
```
helm template slurm oci://ghcr.io/slinkyproject/charts/slurm \
--values=values-slurm.yaml --version=0.3.0 --namespace=slurm --create-namespace
```
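For reference, a hedged sketch of the install commands for the two steps above (operator first, then the cluster chart). The operator chart path and namespace names are assumptions to be checked against the QuickStart guide; the site-specific `storageClass` and `rootSshAuthorizedKeys` overrides belong in `values-slurm.yaml`:
```bash
# Hedged sketch, not copied verbatim from the QuickStart: the operator chart
# path and the namespace names below are assumptions to double-check there.

# 1) Slurm Operator first
helm install slurm-operator oci://ghcr.io/slinkyproject/charts/slurm-operator \
  --version=0.3.0 --namespace=slinky --create-namespace

# 2) Then the Slurm cluster, after editing values-slurm.yaml for this site
#    (storageClass and rootSshAuthorizedKeys, as noted above; the exact key
#    paths live in the chart's default values, so edit the file directly)
helm install slurm oci://ghcr.io/slinkyproject/charts/slurm \
  --values=values-slurm.yaml --version=0.3.0 --namespace=slurm --create-namespace
```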
- ### Official documentation
- https://slinky.schedmd.com/docs/
- Overview
- slurm-bridge
- slurm-operator
- https://github.com/SlinkyProject/slurm-operator/tree/release-0.3/docs
[SlinkyProject] slurm-operator/docs/
- https://github.com/SlinkyProject/slurm-operator/blob/release-0.3/docs/quickstart.md
QuickStart Guide
- ### Other
- https://slurm.schedmd.com/slinky.html
Slinky — Slurm and Kubernetes
- https://slurm.schedmd.com/MISC25/Slinky-KubeConEurope2025.pdf
KubeCon CloudNativeCon - Europe 2025
- **YouTube**
https://www.youtube.com/watch?v=gvp2uTilwrY
<br>
## Platforms that support Slinky
### Rafay platform
- [Self-Service Slurm Clusters on Kubernetes with Rafay GPU PaaS](https://rafay.co/ai-and-cloud-native-blog/self-service-slurm-clusters-on-kubernetes-with-rafay-gpu-paas)
<br>
## slurm-operator/docs/pyxis.md
> https://github.com/SlinkyProject/slurm-operator/blob/main/docs/pyxis.md
- ### Summary (gpt-o4-mini-high)
This document is a how-to guide for integrating Pyxis and Enroot into a Slurm cluster so that it can run containerized jobs, in particular NVIDIA GPU workloads.
**Main points**
- **Install and enable the Pyxis SPANK plugin**: load the Pyxis settings in Slurm's `plugstack.conf` so that slurmctld/slurmd can recognize and invoke the container plugin.
- **Configure NodeSets and Login Pods**: shows how to point specific partitions or node groups at an OCI image that ships the Pyxis/Enroot libraries, and how to run the login container as `privileged` so that Enroot can import and run images.
- **Verification**: submit a job with `srun --partition=… --container-image=…`, confirm that Pyxis imports the image, and run a command inside the container (e.g. `grep PRETTY /etc/os-release`) to check that the whole container path works.
In summary, the guide helps Slurm cluster administrators quickly deploy and test Pyxis + Enroot so that Docker/OCI containerized workloads can run on GPU nodes.
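A minimal sketch of that verification step, assuming a placeholder partition name (`slinky`) and a small public image; neither value comes from the doc itself:
```bash
# Hedged sketch of the pyxis verification run summarized above.
# "slinky" is a placeholder partition name; any OCI image the compute
# nodes can pull will do.
srun --partition=slinky \
     --container-image=alpine:latest \
     grep PRETTY /etc/os-release
# pyxis should log that it imported the image, then the command prints e.g.:
#   PRETTY_NAME="Alpine Linux v3.20"
```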
<br>
---
<br>
## Official slides (v202407)
> https://slurm.schedmd.com/SLUG24/Slinky-Slurm-Operator.pdf (27 pages)
> - https://www.schedmd.com/wp-content/uploads/2024/07/Slinky-20240731.pdf (18 pages)
> - https://slurm.schedmd.com/SLUG24/Slinky-Slurm-Bridge.pdf (22 pages)
>   Slurm Bridge
### What is Slinky?
A collection of projects and initiatives to enable Slurm on Kubernetes:
- Slurm-operator
Manage Slurm nodes in Kubernetes
- Slurm-bridge
Enable Slurm scheduling of Kubernetes Pods
- Kubernetes Tooling
- Helm Charts
- Container Images
- Future work
---
### HPC vs. Cloud Native - Historical Assumptions
- ### HPC
- Underlying software is mutable
- Users assume fine-grained control
- Users are often systems experts that understand infrastructure
- Have a tolerance for complexity
- Access to compute handled by a resource manager or scheduling system
- Users own the node entirely during computation
- Assumption of node homogeneity
- ### Cloud Native
- Underlying software is immutable
- Users are not systems experts, do not think in terms of parallel
- Limited tolerance for complexity
- Users share nodes
- Can introduce jitter
- Can blow through bandwidth
- Assumption of heterogeneous nodes
- Not a ton of attention given to network topology
---

### Domain Pools
- Kubernetes manages its nodes, running a kubelet
- Slurm manages its nodes, running a slurmd
- Slinky tooling will manage scaling Slurm nodes
- Slurm Operator
---
### Why Slurm Operator
- Kubernetes lacks fine-grained control of native resources (CPU, Memory)
- HPC and AI training workloads are inefficient
- Need to build the infrastructure to get this capability
- Ability to have fast scheduling that is not possible in kubelet
- Ability to use both Kubernetes and Slurm workloads on the same set of nodes
- Do not need to separate the clusters!
---
### Slurm Operator
### Requirements
- Can run Slurm and Kubernetes workloads on pools of nodes
- Reconcile Kubernetes and Slurm as the source of truth
- Propagate Slurm node state bidirectionally
- Support dynamic scale-in and scale-out of Slurm nodes
- Support most Slurm Scheduling features
---
### Restrictions
- Configure Kubernetes with static CPU management policy, which only allows for pinned cores, but not positioning or affinity
- Properly constrain hwloc view of the node
- Disable cgroups within Slurm
- Kubernetes does not natively allow delegation of cgroup sub-tree to pod
- Slurmd cannot constrain slurmstepd via cgroups
- Should configure Slurm partitions with OverSubscribe=Exclusive
- The slurmd (pod) can get Out-of-Memory (OOM) killed because of user jobs!
- Pod-to-pod connections will still be through the regular Container Network Interface (CNI)
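A hedged sketch of what those restrictions look like as configuration. The file paths, plugin choices, and partition name below are assumptions rather than values taken from the Slinky charts:
```bash
# Hedged sketch only: real file locations and keys depend on the deployment
# and on what the Slinky charts already configure.

# kubelet: static CPU management policy (pinned cores only, no placement/affinity)
cat <<'EOF' >> /var/lib/kubelet/config.yaml
cpuManagerPolicy: static
EOF

# slurm.conf: disable cgroup enforcement (kubelet owns the cgroup tree) and
# give whole nodes to jobs, per the recommendation above
cat <<'EOF' >> /etc/slurm/slurm.conf
# no cgroup-based process tracking or task confinement
ProctrackType=proctrack/linuxproc
TaskPlugin=task/none
# whole-node allocation via OverSubscribe=EXCLUSIVE
PartitionName=slinky Nodes=ALL Default=YES OverSubscribe=EXCLUSIVE
EOF
```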
---

### Big Picture
1. Install Slinky Custom Resource Definitions (CRDs)
2. Add/Delete/Update Slinky Custom Resource (CR)
3. Network Communication
---

### Slurm Operator
1. User installs Slinky CRs
2. Cluster Controller creates Slurm Client from Cluster CR
3. Slurm Client starts informer to poll Slurm resources
4. NodeSet Controller creates NodeSet Pods from NodeSet CR
- The slurmd registers to slurmctld on startup
5. NodeSet Controller terminates NodeSet Pod after fully draining Slurm node
- NodeSet Pod deletes itself from Slurm on preStop
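A hedged way to watch that sequence with kubectl; the namespace and the controller pod name are assumptions that depend on the Helm values used at install time:
```bash
# Hedged sketch: namespaces and workload names below are assumptions.

# 1) confirm the Slinky CRDs (Cluster, NodeSet, ...) are installed
kubectl get crds | grep -i slinky

# 2) watch NodeSet pods (slurmd) come up; each registers with slurmctld on start
kubectl get pods --namespace=slurm --watch

# 3) check Slurm's view of the nodes from inside the controller pod
#    (<slurmctld-pod> is a placeholder; find it with the command in step 2)
kubectl exec --namespace=slurm <slurmctld-pod> -- sinfo
```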
---
### Slurm Cluster Scaling

### Auto-Scale NodeSet
1. Metrics are gathered and exported.
2. HPA scales CR replicas based on read metrics and defined policy.
3. Slurm-Operator reconciles CR changes, scaling in or out NodeSet Pods.
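Before wiring up metrics and HPA, the same scale-in/out path can be exercised by hand; the NodeSet resource group and object name below are assumptions, so look them up first:
```bash
# Hedged sketch: the NodeSet resource/group and object name are assumptions,
# so discover the real ones before scaling.
kubectl api-resources | grep -i nodeset
kubectl get nodesets --namespace=slurm

# Changing replicas is what HPA does automatically; the operator reconciles
# the change by adding slurmd pods or draining and removing them.
kubectl scale nodesets.slinky.slurm.net slurm-compute --replicas=4 --namespace=slurm
```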
---
### Demo Screenshots


---
### Future Work
- Slurm scheduler component
- Slurm finer-grained management of kubelet resource allocations (e.g. CPUs, GPUs, Core pinning)
- Current Kubernetes cannot mix pinned and unpinned cores, let alone more complex versions of core assignment
- Increase pluggable infrastructure of Kubernetes - current CPU and memory manager leaves much to be desired
- Network Topology Aware Scheduling in Slurm
- Using NFD combined with Slurm internals
- Add Slurm scheduling extension to handle resource scheduling for the cluster
- Map current scheduling concepts not in Slurm, e.g. affinity/anti-affinity
---
### Questions?
---
### Extended Reading
### Use Cases - Immediate
- Ephemeral Slurm Clusters in the Cloud
- Consistent user experience regardless of cloud vendor
- Easy to plug in underlying infrastructure and just work
- Running traditional HPC workloads without needing to translate into Kubernetes pods
- Currently, many workloads in this space, including: weather; genomics; scientific computing
- Fine grained resource allocation and management
- Efficient execution of multi-node workloads
- E.g., AI/ML Training
Initial Slinky demo demonstrates these use cases by running an AI Benchmark on an ephemeral Slurm cluster
---
### Use Cases - Immediate
- For a hybrid compute environment, coordinate workloads running in Kubernetes and Slurm to allow for efficient sharing of resources
- Intended approach is to provide a Kubernetes scheduling plugin that defers scheduling decisions to Slurm, allowing Slurm to have a complete view of both K8s and Slurm workloads
---
### Use Cases - Future
- Schedule AI/ML Training, Single and Multi-node Inference in Kubernetes Clusters with minimal translation
- Longer-term, support training operations in a Cloud-Native environment
- Key obstacles:
- fine-grained native resource allocation and management
- fine-grained accelerator allocation and management
- DRA headed in this direction
- Optimal resource use
- Bin packing - maximize utilization of node resources
- CPU Affinity management - avoid conflicts between pods
---

### Slurm Daemons
- Slurmctld
- Slurm Control-Plane
- Slurm API
- Slurm Daemon
- Client Commands
- Slurmd
- Slurm Compute Node Agent
- Slurmstepd
- Slurm Job Agent
- Slurmrestd
- Slurm REST API
- Slurmdbd
- Slurm Database Agent
- Sackd
- Slurm Auth/Cred Agent
---

### Jobs
1. User can be authenticated with Slurm
2. User submits a Slurm job.
3. Job runs until completion.
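A minimal sketch of steps 2 and 3 as seen from a login pod; the partition name is a placeholder:
```bash
# Hedged sketch of submitting work once authenticated on a login pod.
# "slinky" is a placeholder partition name.

# interactive: run a trivial command across two nodes
srun --partition=slinky --nodes=2 hostname

# batch: submit a wrapped job, then watch it run to completion
sbatch --partition=slinky --wrap="hostname && sleep 30"
squeue -u "$USER"
```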
---

### Slurm: Kubernetes + non-Kubernetes
1. References a resource
2. Network Communication
- Slurm components (e.g. slurmctld, slurmd, slurmrestd, slurmdbd) can reside anywhere
- Kubernetes
- Bare-metal
- Virtual Machine
- Communication is key!
---
### Slurm Helm Chart

<br>
{%hackmd vaaMgNRPS4KGJDSFG0ZE0w %}