[Command] kubectl get
===
###### tags: `K8s / commands`, `Kubernetes`, `k8s`, `kubectl`, `kubectl get`
<br>
[TOC]
<br>
## `kubectl get --raw`
### **Step-by-step example**
> Walk the discovery API from the top-level group list all the way down to a single subresource.
- ### Step1: `/apis`
```
$ kubectl get --raw '/apis' | jq
...
    {
      "name": "slinky.slurm.net",
      "versions": [
        {
          "groupVersion": "slinky.slurm.net/v1beta1",
          "version": "v1beta1"
        }
      ],
      "preferredVersion": {
        "groupVersion": "slinky.slurm.net/v1beta1",
        "version": "v1beta1"
      }
    },
...
```
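The `/apis` endpoint returns an `APIGroupList`, so the group names can be scanned without the surrounding noise; `jq -r` strips the quotes:
```
$ kubectl get --raw '/apis' | jq -r '.groups[].name'
```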
- ### Step2: `/apis/<api-group>`
```
$ kubectl get --raw '/apis/slinky.slurm.net' | jq
{
  "kind": "APIGroup",
  "apiVersion": "v1",
  "name": "slinky.slurm.net",
  "versions": [
    {
      "groupVersion": "slinky.slurm.net/v1beta1",
      "version": "v1beta1"
    }
  ],
  "preferredVersion": {
    "groupVersion": "slinky.slurm.net/v1beta1",
    "version": "v1beta1"
  }
}
```
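If only the preferred version is needed for the next step, jq can pull it out directly (output derived from the `APIGroup` above):
```
$ kubectl get --raw '/apis/slinky.slurm.net' | jq -r '.preferredVersion.groupVersion'
slinky.slurm.net/v1beta1
```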
- ### Step3: `/apis/<api-group>/<api-version>`
```
$ kubectl get --raw '/apis/slinky.slurm.net/v1beta1' | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "slinky.slurm.net/v1beta1",
  "resources": [
    ...
    {
      "name": "nodesets/scale",
      "singularName": "",
      "namespaced": true,
      "group": "autoscaling",
      "version": "v1",
      "kind": "Scale",
      "verbs": [
        "get",
        "patch",
        "update"
      ]
    },
    ...
  ]
}
```
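To list every resource in the group at a glance (subresources show up as `<resource>/<subresource>`, like `nodesets/scale` above):
```
$ kubectl get --raw '/apis/slinky.slurm.net/v1beta1' | jq -r '.resources[].name'
```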
- ### Step4: `/apis/<api-group>/<api-version>/namespaces/<namespace>/<resource-plural>/<resource-name>`
```
$ kubectl get --raw '/apis/slinky.slurm.net/v1beta1/namespaces/slurm/nodesets/slurm-worker-gpu1080' | jq
```
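The call above returns the full NodeSet object as JSON. A jq filter can reduce it to the interesting fields; `.spec.replicas` is an assumption here, inferred from the Scale status shown in Step5:
```
$ kubectl get --raw '/apis/slinky.slurm.net/v1beta1/namespaces/slurm/nodesets/slurm-worker-gpu1080' \
    | jq '{name: .metadata.name, replicas: .spec.replicas}'
```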
- ### Step5: `/apis/<api-group>/<api-version>/namespaces/<namespace>/<resource-plural>/<resource-name>/<subresource-name>`
```
kubectl get --raw "/apis/slinky.slurm.net/v1beta1/namespaces/slurm/nodesets/slurm-worker-gpu1080/scale" | jq
{
  "kind": "Scale",
  "apiVersion": "autoscaling/v1",
  "metadata": {
    "name": "slurm-worker-gpu1080",
    "namespace": "slurm",
    "uid": "83e0e85e-8758-4157-b0c5-de928faead60",
    "resourceVersion": "25964227",
    "creationTimestamp": "2025-11-21T10:31:39Z"
  },
  "spec": {},
  "status": {
    "replicas": 0,
    "selector": "app.kubernetes.io/instance=slurm-worker-gpu1080,app.kubernetes.io/name=slurmd"
  }
}
```
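Since `nodesets/scale` advertises the `get`/`patch`/`update` verbs (Step3), the standard scale tooling should work against it as well; a minimal sketch, assuming the NodeSet behaves like any resource that implements the scale subresource:
```
$ kubectl scale nodesets slurm-worker-gpu1080 -n slurm --replicas=1
```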
<br>
---
<br>
## `kubectl get -f <file-name>`
### Example: `pvc.yaml` contains both PVCs and PVs; how do we pull out only the PVs?
```bash
pv_names=$(kubectl get -f "pvc.yaml" \
-o jsonpath='{.items[?(@.kind=="PersistentVolume")].metadata.name}' 2>/dev/null || true)
```
- ### Explanation
> by gpt-5.2 (2025/12/22)
**Grab the name of every `kind: PersistentVolume` defined in `pvc.yaml` (i.e. the resources that file corresponds to in the cluster) and store them in the variable `pv_names`**, without letting the script fail even when nothing is found.
### Breaking it down piece by piece:
* `kubectl get -f "pvc.yaml"`
  Reads the K8s resources defined in `pvc.yaml` (possibly PVs/PVCs/StorageClasses, ...). If those resources already exist in the cluster, `kubectl get` returns their live state; if not, it errors out.
* `-o jsonpath='{.items[?(@.kind=="PersistentVolume")].metadata.name}'`
  Uses JSONPath to **filter the entries under `.items[]` whose `kind == "PersistentVolume"`** and take their `.metadata.name`.
  → The result is typically a whitespace-separated string of PV names, e.g. `pv-a pv-b pv-c`
* `2>/dev/null`
  Discards stderr (e.g. the file defines no resources, the resources don't exist, RBAC denies access), so the script doesn't spew error messages.
* `|| true`
  Even if `kubectl get` fails, the whole command still exits 0, so the script can keep running.
### Common usage scenarios:
* Follow-up cleanup or checks on the PVs you grabbed (see the sketch below), e.g.:
  * `kubectl delete pv $pv_names`
  * `kubectl patch pv ...`
  * handling the reclaim policy / finalizers
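A minimal sketch of such a follow-up, assuming you want to flip the reclaim policy before touching the claims (the `Retain` value is just an example):
```bash
# Only act when the JSONPath actually matched some PVs.
if [ -n "$pv_names" ]; then
  for pv in $pv_names; do
    kubectl patch pv "$pv" \
      -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
  done
fi
```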
### Reminders:
* If `pvc.yaml` actually only contains PVCs (no PVs), `pv_names` will be empty.
* PVs often aren't defined in the same YAML (especially with dynamic provisioning), so it's normal for this snippet to find nothing in that case.
<br>
---
<br>
## `kubectl get pod`
> List pods (with the `pod/` kind prefix)
```
$ kubectl get pod --show-kind
```
- **When listing, each name carries the `pod/` prefix**
- **Without the `--show-kind` flag**
```
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-world-57959d477f-cbgpz 1/1 Running 0 3d17h
```
- **With the `--show-kind` flag**
```
$ kubectl get pod --show-kind
NAME READY STATUS RESTARTS AGE
pod/hello-world-57959d477f-cbgpz 1/1 Running 0 3d17h
```
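When several resource types are requested in one `kubectl get`, the kind prefix is added automatically even without `--show-kind`; the deployment below is inferred from the pod's name and is only illustrative:
```
$ kubectl get pod,deployment
NAME                               READY   STATUS    RESTARTS   AGE
pod/hello-world-57959d477f-cbgpz   1/1     Running   0          3d17h

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-world   1/1     1            1           3d17h
```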
<br>
## `kubectl get crd`
> Query the stored and served versions of every custom resource definition (CRD)
```
$ kubectl get crd -o=custom-columns='NAME:.metadata.name,STORED_VERSION:.status.storedVersions,SERVED_VERSIONS:.spec.versions[*].name'
```
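The same columns work for a single CRD; the name `nodesets.slinky.slurm.net` is assumed from the slinky group used earlier:
```
$ kubectl get crd nodesets.slinky.slurm.net -o=custom-columns='NAME:.metadata.name,STORED_VERSION:.status.storedVersions,SERVED_VERSIONS:.spec.versions[*].name'
```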
<br>
## `kubectl get events`
> List events sorted by time, most recent last
```
$ kubectl get events --sort-by='.lastTimestamp'
```
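To narrow the list to warnings only, the sort combines with a field selector (a standard kubectl flag):
```
$ kubectl get events --sort-by='.lastTimestamp' --field-selector type=Warning
```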
<br>
## `kubectl get apiservice`
> List the APIService objects registered with the API aggregation layer
```
$ kubectl get apiservice
NAME SERVICE AVAILABLE AGE
v1. Local True 124d
v1.acme.cert-manager.io Local True 123d
v1.admissionregistration.k8s.io Local True 124d
v1.apiextensions.k8s.io Local True 124d
v1.apps Local True 124d
v1.authentication.k8s.io Local True 124d
v1.authorization.k8s.io Local True 124d
v1.autoscaling Local True 124d
v1.batch Local True 124d
v1.cert-manager.io Local True 123d
v1.certificates.k8s.io Local True 124d
v1.coordination.k8s.io Local True 124d
v1.discovery.k8s.io Local True 124d
v1.events.k8s.io Local True 124d
v1.flowcontrol.apiserver.k8s.io Local True 124d
v1.monitoring.coreos.com Local True 123d
v1.networking.k8s.io Local True 124d
v1.node.k8s.io Local True 124d
v1.nvidia.com Local True 124d
v1.policy Local True 124d
v1.rbac.authorization.k8s.io Local True 124d
v1.scheduling.k8s.io Local True 124d
v1.storage.k8s.io Local True 124d
v1alpha1.eventing.keda.sh Local True 116d
v1alpha1.k8s.mariadb.com Local True 17d
v1alpha1.keda.sh Local True 116d
v1alpha1.monitoring.coreos.com Local True 123d
v1alpha1.nfd.k8s-sigs.io Local True 124d
v1alpha1.nvidia.com Local True 124d
v1beta1.external.metrics.k8s.io keda/keda-operator-metrics-apiserver True 116d
v1beta1.slinky.slurm.net Local True 2d23h
v1beta3.flowcontrol.apiserver.k8s.io Local True 124d
v2.autoscaling Local True 124d
```
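The only APIService here that is not served `Local`ly is KEDA's external-metrics adapter. To see just KEDA's entries, a label selector should work, assuming the chart labels them with `app.kubernetes.io/instance=keda`; the aggregated one can also be dumped in full:
```
$ kubectl get apiservice -l app.kubernetes.io/instance=keda
$ kubectl get apiservice v1beta1.external.metrics.k8s.io -o yaml
```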
<br>
{%hackmd vaaMgNRPS4KGJDSFG0ZE0w %}