# Replica 1 Non Resilient Pools

## Why Ceph Replica 1 Non Resilient Pools

Some applications manage resiliency at the application level: they are not concerned with data loss at the storage layer because they run multiple instances of the application on top of independent replicas of the data. Replica 1 Non Resilient pools provide a storage solution that offers Ceph replica 1 for RBD, avoiding replication that the application does not require. If data resiliency is already managed by your application, you can use Ceph replica 1 to avoid unnecessary replication of data.

## Important

**For this to work with the Local Storage Operator, there must be at least one additional usable disk on each node of the cluster. Otherwise, after the feature is enabled via the patch below, the replica-1 OSDs will not come up and the storagecluster will stay in the Progressing phase indefinitely.**

## Setup

Wait for the storagecluster to be ready:

```
~ % oc get storagecluster
NAME                 AGE     PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   7m30s   Ready              2024-02-05T13:54:15Z   4.15.0
```

Then patch the storagecluster to enable Non-Resilient pools:

```
~ % oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephNonResilientPools/enable", "value": true }]'
```

After patching, the storagecluster goes back into the Progressing phase. Wait for it to reach Ready again:

```
~ % oc get storagecluster
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   10m   Ready              2024-02-05T13:56:15Z   4.15.0
```

## Inner Details

### CephBlockPools

```
~ % oc get cephblockpools
NAME                                          PHASE
ocs-storagecluster-cephblockpool              Ready
ocs-storagecluster-cephblockpool-us-east-1a   Ready
ocs-storagecluster-cephblockpool-us-east-1b   Ready
ocs-storagecluster-cephblockpool-us-east-1c   Ready
```

The first one is the default cephblockpool with replica 3. The other three pools, whose names end with the failure domain, are the replica-1 cephblockpools: one replica-1 cephblockpool is created per failure domain.
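To confirm what a replica-1 pool looks like at the CR level, you can read the replication size directly from one of the CephBlockPool resources. A minimal sketch, assuming the standard Rook `CephBlockPool` field `spec.replicated.size` (field layout may differ between versions):

```
# Spot-check one replica-1 pool; expect a replication size of 1
# (the default ocs-storagecluster-cephblockpool reports 3)
~ % oc get cephblockpool ocs-storagecluster-cephblockpool-us-east-1a -n openshift-storage \
    -o jsonpath='{.spec.replicated.size}{"\n"}'
```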
### Storageclasses

```
~ % oc get storageclass
NAME                                        PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)                               kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   104m
gp2-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
gp3-csi                                     ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   104m
ocs-storagecluster-ceph-non-resilient-rbd   openshift-storage.rbd.csi.ceph.com      Delete          WaitForFirstConsumer   true                   46m
ocs-storagecluster-ceph-rbd                 openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   52m
ocs-storagecluster-cephfs                   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   52m
openshift-storage.noobaa.io                 openshift-storage.noobaa.io/obc         Delete          Immediate              false                  50m
```

`ocs-storagecluster-ceph-non-resilient-rbd` is the new storageclass backed by the non-resilient, topology-constrained pools:

```
~ % oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    description: 'Ceph Non Resilient Pools : Provides RWO Filesystem volumes, and RWO and RWX Block volumes'
  creationTimestamp: "2022-09-08T15:50:31Z"
  name: ocs-storagecluster-ceph-non-resilient-rbd
  resourceVersion: "69004"
  uid: e59faabe-688a-45fb-bc42-dccac72a4efd
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering,deep-flatten,exclusive-lock,object-map,fast-diff
  imageFormat: "2"
  pool: ocs-storagecluster-cephblockpool
  topologyConstrainedPools: |-
    [
      {
        "poolName": "ocs-storagecluster-cephblockpool-us-east-1b",
        "domainSegments": [
          {
            "domainLabel": "zone",
            "value": "us-east-1b"
          }
        ]
      },
      {
        "poolName": "ocs-storagecluster-cephblockpool-us-east-1c",
        "domainSegments": [
          {
            "domainLabel": "zone",
            "value": "us-east-1c"
          }
        ]
      },
      {
        "poolName": "ocs-storagecluster-cephblockpool-us-east-1a",
        "domainSegments": [
          {
            "domainLabel": "zone",
            "value": "us-east-1a"
          }
        ]
      }
    ]
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

### OSD Pods

```
~ % oc get pods | grep osd
rook-ceph-osd-0-6dc76777bc-snhnm                              2/2   Running     0   9m50s
rook-ceph-osd-1-768bdfdc4-h5n7k                               2/2   Running     0   9m48s
rook-ceph-osd-2-69878645c4-bkdlq                              2/2   Running     0   9m37s
rook-ceph-osd-3-64c44d7d76-zfxq9                              2/2   Running     0   5m23s
rook-ceph-osd-4-654445b78f-nsgjb                              2/2   Running     0   5m23s
rook-ceph-osd-5-5775949f57-vz6jp                              2/2   Running     0   5m22s
rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf   0/1   Completed   0   10m
rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t   0/1   Completed   0   10m
rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv   0/1   Completed   0   10m
rook-ceph-osd-prepare-us-east-1a-data-0l874p-2tc99            0/1   Completed   0   5m44s
rook-ceph-osd-prepare-us-east-1b-data-0q7t7h-swgzk            0/1   Completed   0   5m43s
rook-ceph-osd-prepare-us-east-1c-data-07nv5w-77mmw            0/1   Completed   0   5m44s
```

After patching for replica-1 there are three new osd-prepare pods, one per failure domain, followed by three new OSDs.
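The `topologyConstrainedPools` parameter ties each replica-1 pool to a `zone` domain segment, and the CSI provisioner matches that against the topology labels of the node where the consuming pod is scheduled (which is why the storageclass uses `WaitForFirstConsumer`). A quick way to see which zone each node, and therefore each replica-1 OSD, belongs to, assuming the usual `topology.kubernetes.io/zone` node label:

```
# Adds a ZONE column next to each node; the values should match the
# failure-domain suffixes of the replica-1 cephblockpools
~ % oc get nodes -L topology.kubernetes.io/zone
```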
### Ceph details

```
sh-5.1$ ceph osd df tree
ID   CLASS       WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP  META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
 -1              12.00000         -   12 TiB  302 MiB  199 MiB   0 B  103 MiB   12 TiB  0.00  1.00    -          root default
 -5              12.00000         -   12 TiB  302 MiB  199 MiB   0 B  103 MiB   12 TiB  0.00  1.00    -          region us-east-1
-14               4.00000         -    4 TiB  100 MiB   66 MiB   0 B   34 MiB  4.0 TiB  0.00  1.00    -          zone us-east-1a
-13               2.00000         -    2 TiB   58 MiB   32 MiB   0 B   25 MiB  2.0 TiB  0.00  1.15    -          host ocs-deviceset-gp3-csi-2-data-05ggvh
  2  ssd          2.00000   1.00000    2 TiB   58 MiB   32 MiB   0 B   25 MiB  2.0 TiB  0.00  1.15   52      up          osd.2
-41               2.00000         -    2 TiB   43 MiB   33 MiB   0 B  9.1 MiB  2.0 TiB  0.00  0.85    -          host us-east-1a-data-0knkjg
  3  us-east-1a   2.00000   1.00000    2 TiB   43 MiB   33 MiB   0 B  9.1 MiB  2.0 TiB  0.00  0.85   61      up          osd.3
 -4               4.00000         -    4 TiB  101 MiB   66 MiB   0 B   34 MiB  4.0 TiB  0.00  1.00    -          zone us-east-1b
 -3               2.00000         -    2 TiB   71 MiB   46 MiB   0 B   26 MiB  2.0 TiB  0.00  1.42    -          host ocs-deviceset-gp3-csi-1-data-0d98jz
  0  ssd          2.00000   1.00000    2 TiB   71 MiB   46 MiB   0 B   26 MiB  2.0 TiB  0.00  1.42   57      up          osd.0
-51               2.00000         -    2 TiB   29 MiB   21 MiB   0 B  8.6 MiB  2.0 TiB  0.00  0.58    -          host us-east-1b-data-0jp5x5
  5  us-east-1b   2.00000   1.00000    2 TiB   29 MiB   21 MiB   0 B  8.6 MiB  2.0 TiB  0.00  0.58   57      up          osd.5
-10               4.00000         -    4 TiB  101 MiB   67 MiB   0 B   34 MiB  4.0 TiB  0.00  1.01    -          zone us-east-1c
 -9               2.00000         -    2 TiB   64 MiB   39 MiB   0 B   25 MiB  2.0 TiB  0.00  1.27    -          host ocs-deviceset-gp3-csi-0-data-0mgxqb
  1  ssd          2.00000   1.00000    2 TiB   64 MiB   39 MiB   0 B   25 MiB  2.0 TiB  0.00  1.27   57      up          osd.1
-46               2.00000         -    2 TiB   37 MiB   28 MiB   0 B  8.9 MiB  2.0 TiB  0.00  0.74    -          host us-east-1c-data-0lxvf6
  4  us-east-1c   2.00000   1.00000    2 TiB   37 MiB   28 MiB   0 B  8.9 MiB  2.0 TiB  0.00  0.74   58      up          osd.4
                              TOTAL   12 TiB  302 MiB  199 MiB   0 B  103 MiB   12 TiB  0.00
MIN/MAX VAR: 0.58/1.42  STDDEV: 0
```

```
sh-5.1$ ceph osd pool ls detail
pool 1 'ocs-storagecluster-cephblockpool' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 74 lfor 0/0/35 flags hashpspool,selfmanaged_snaps stripe_width 0 target_size_ratio 0.49 application rbd
pool 2 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 19 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 3 'ocs-storagecluster-cephfilesystem-metadata' replicated size 3 min_size 2 crush_rule 4 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 75 lfor 0/0/35 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 4 'ocs-storagecluster-cephfilesystem-data0' replicated size 3 min_size 2 crush_rule 6 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 76 lfor 0/0/37 flags hashpspool stripe_width 0 target_size_ratio 0.49 application cephfs
pool 5 'ocs-storagecluster-cephblockpool-us-east-1a' replicated size 1 min_size 1 crush_rule 8 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 115 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 6 'ocs-storagecluster-cephblockpool-us-east-1b' replicated size 1 min_size 1 crush_rule 10 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 101 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 7 'ocs-storagecluster-cephblockpool-us-east-1c' replicated size 1 min_size 1 crush_rule 12 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 110 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
```
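In the pool listing above, the three zone-suffixed pools have `size 1 min_size 1` and each one gets its own CRUSH rule (crush_rule 8, 10 and 12). The same can be checked per pool from the toolbox with the standard `ceph osd pool get` subcommands, for example:

```
# Should report size: 1 for a replica-1 pool
sh-5.1$ ceph osd pool get ocs-storagecluster-cephblockpool-us-east-1a size
# Names the CRUSH rule that constrains this pool to its failure domain
sh-5.1$ ceph osd pool get ocs-storagecluster-cephblockpool-us-east-1a crush_rule
```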
## Testing the non-resilient storageclass

The storageclass uses topology-aware provisioning: the volume is created in the replica-1 pool that matches the zone where the consuming pod is scheduled.

#### Prepare a test namespace

```
~ % oc create ns test
~ % oc project test
~ % oc create sa test
~ % oc adm policy add-scc-to-user -z test privileged
```

#### Create a PVC

```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: non-resilient-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-non-resilient-rbd
EOF
```

The PVC stays in the Pending state, waiting for a consumer:

```
~ % oc get pvc | grep non-resilient-rbd-pvc
non-resilient-rbd-pvc   Pending                          ocs-storagecluster-ceph-non-resilient-rbd   16s
```

#### Create a pod to consume the PVC

```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  nodeSelector: # Change according to failure domain
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: non-resilient-rbd-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
      securityContext:
        privileged: true
EOF
```

```
~ % oc get pods | grep task
task-pv-pod   1/1   Running   0   57s
```

#### Check the PVC status and the newly created PV

```
~ % oc get pvc | grep non-resilient-rbd-pvc
non-resilient-rbd-pvc   Bound   pvc-d1a26969-a15c-4e60-8f76-847d1fdd6041   1Gi   RWO   ocs-storagecluster-ceph-non-resilient-rbd   3m49s
~ % oc get pv | grep non-resilient-rbd-pvc
pvc-d1a26969-a15c-4e60-8f76-847d1fdd6041   1Gi   RWO   Delete   Bound   openshift-storage/non-resilient-rbd-pvc   ocs-storagecluster-ceph-non-resilient-rbd   2m18s
```

#### From the toolbox pod, check which pool holds the RBD image for the new PV

```
sh-4.4$ rbd ls ocs-storagecluster-cephblockpool-us-east-1a
csi-vol-937ee9e9-2f99-11ed-b2d0-0a580a83001a
```

#### Match this imageName to the imageName in the PV created above

```
~ % oc get pv pvc-d1a26969-a15c-4e60-8f76-847d1fdd6041 -o=jsonpath='{.spec.csi.volumeAttributes}' | jq
{
  "clusterID": "openshift-storage",
  "imageFeatures": "layering,deep-flatten,exclusive-lock,object-map,fast-diff",
  "imageFormat": "2",
  "imageName": "csi-vol-937ee9e9-2f99-11ed-b2d0-0a580a83001a",
  "journalPool": "ocs-storagecluster-cephblockpool",
  "pool": "ocs-storagecluster-cephblockpool-us-east-1a",
  "storage.kubernetes.io/csiProvisionerIdentity": "1662652453217-8081-openshift-storage.rbd.csi.ceph.com",
  "topologyConstrainedPools": "[\n {\n \"poolName\": \"ocs-storagecluster-cephblockpool-us-east-1a\",\n \"domainSegments\": [\n {\n \"domainLabel\": \"zone\",\n \"value\": \"us-east-1a\"\n }\n ]\n },\n {\n \"poolName\": \"ocs-storagecluster-cephblockpool-us-east-1b\",\n \"domainSegments\": [\n {\n \"domainLabel\": \"zone\",\n \"value\": \"us-east-1b\"\n }\n ]\n },\n {\n \"poolName\": \"ocs-storagecluster-cephblockpool-us-east-1c\",\n \"domainSegments\": [\n {\n \"domainLabel\": \"zone\",\n \"value\": \"us-east-1c\"\n }\n ]\n }\n]"
}
```
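To double-check that topology-aware provisioning did what we expect, compare the zone the pod was pinned to with the pool recorded on the PV. A small sketch reusing the objects created above:

```
# Zone the pod was scheduled to (from the nodeSelector above)
~ % oc get pod task-pv-pod -o jsonpath='{.spec.nodeSelector}{"\n"}'
# Pool backing the PV; it should be the replica-1 pool for that same zone
~ % oc get pv pvc-d1a26969-a15c-4e60-8f76-847d1fdd6041 -o jsonpath='{.spec.csi.volumeAttributes.pool}{"\n"}'
```

The pool name should end with the same zone that appears in the pod's nodeSelector (here `us-east-1a`).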
## Scaling up the number of non-resilient osds

By default, one OSD is added for each failure domain. Patch the storagecluster to increase the number of non-resilient OSDs per failure domain, replacing `2` in the patch below with the desired number:

```
~ $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephNonResilientPools/count", "value": 2 }]'
```

The storagecluster goes back into the Progressing phase; wait for it to become Ready again:

```
~ $ oc get storagecluster
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   76m   Ready              2024-02-19T05:21:07Z   4.15.0
```

### New OSD pods

```
~ $ oc get pods | grep osd
rook-ceph-osd-0-df7f9c69c-dqwqt                                2/2   Running     0   70m
rook-ceph-osd-1-cdc97fd77-7xwmm                                2/2   Running     0   70m
rook-ceph-osd-2-6d559549b9-hg7nx                               2/2   Running     0   70m
rook-ceph-osd-3-786c94d5d4-x4kc5                               2/2   Running     0   60m
rook-ceph-osd-4-58d97f6b77-gz8nx                               2/2   Running     0   60m
rook-ceph-osd-5-5d8db7597c-6gbxm                               2/2   Running     0   60m
rook-ceph-osd-6-7fc87ffdc7-c5qfz                               2/2   Running     0   8m42s
rook-ceph-osd-7-868c5f4c68-t7pbn                               2/2   Running     0   8m38s
rook-ceph-osd-8-f94985485-grbvx                                2/2   Running     0   8m36s
rook-ceph-osd-prepare-6517706ddfc97525f6914e030258209e-rzvzv   0/1   Completed   0   70m
rook-ceph-osd-prepare-c11cfb23602a82cdc79e25b83369cc54-479kf   0/1   Completed   0   70m
rook-ceph-osd-prepare-eb4107c1221a8047a7746be1c341ecc2-rrtrd   0/1   Completed   0   70m
rook-ceph-osd-prepare-us-east-1a-data-0knkjg-gb7bq             0/1   Completed   0   60m
rook-ceph-osd-prepare-us-east-1a-data-1v5fwd-45ls9             0/1   Completed   0   8m57s
rook-ceph-osd-prepare-us-east-1b-data-0jp5x5-gb5qz             0/1   Completed   0   60m
rook-ceph-osd-prepare-us-east-1b-data-1hfftf-x5b4g             0/1   Completed   0   8m56s
rook-ceph-osd-prepare-us-east-1c-data-0lxvf6-qpw6k             0/1   Completed   0   60m
rook-ceph-osd-prepare-us-east-1c-data-1rm7lr-msb6h             0/1   Completed   0   8m56s
```

### Ceph details

```
sh-5.1$ ceph osd df tree
ID   CLASS       WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP  META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
 -1              18.00000         -   18 TiB  430 MiB  274 MiB   0 B  155 MiB   18 TiB  0.00  1.00    -          root default
 -5              18.00000         -   18 TiB  430 MiB  274 MiB   0 B  155 MiB   18 TiB  0.00  1.00    -          region us-east-1
-14               6.00000         -    6 TiB  149 MiB   91 MiB   0 B   58 MiB  6.0 TiB  0.00  1.04    -          zone us-east-1a
-13               2.00000         -    2 TiB   48 MiB   15 MiB   0 B   32 MiB  2.0 TiB  0.00  1.00    -          host ocs-deviceset-gp3-csi-2-data-05ggvh
  2  ssd          2.00000   1.00000    2 TiB   48 MiB   15 MiB   0 B   32 MiB  2.0 TiB  0.00  1.00   33      up          osd.2
-41               2.00000         -    2 TiB   53 MiB   37 MiB   0 B   16 MiB  2.0 TiB  0.00  1.11    -          host us-east-1a-data-0knkjg
  3  us-east-1a   2.00000   1.00000    2 TiB   53 MiB   37 MiB   0 B   16 MiB  2.0 TiB  0.00  1.11   44      up          osd.3
-61               2.00000         -    2 TiB   48 MiB   39 MiB   0 B  9.5 MiB  2.0 TiB  0.00  1.01    -          host us-east-1a-data-1v5fwd
  7  us-east-1a   2.00000   1.00000    2 TiB   48 MiB   39 MiB   0 B  9.5 MiB  2.0 TiB  0.00  1.01   36      up          osd.7
 -4               6.00000         -    6 TiB  132 MiB   91 MiB   0 B   41 MiB  6.0 TiB  0.00  0.92    -          zone us-east-1b
 -3               2.00000         -    2 TiB   65 MiB   43 MiB   0 B   22 MiB  2.0 TiB  0.00  1.36    -          host ocs-deviceset-gp3-csi-1-data-0d98jz
  0  ssd          2.00000   1.00000    2 TiB   65 MiB   43 MiB   0 B   22 MiB  2.0 TiB  0.00  1.36   35      up          osd.0
-51               2.00000         -    2 TiB   35 MiB   24 MiB   0 B   10 MiB  2.0 TiB  0.00  0.73    -          host us-east-1b-data-0jp5x5
  5  us-east-1b   2.00000   1.00000    2 TiB   35 MiB   24 MiB   0 B   10 MiB  2.0 TiB  0.00  0.73   40      up          osd.5
-56               2.00000         -    2 TiB   32 MiB   24 MiB   0 B  8.4 MiB  2.0 TiB  0.00  0.67    -          host us-east-1b-data-1hfftf
  6  us-east-1b   2.00000   1.00000    2 TiB   32 MiB   24 MiB   0 B  8.4 MiB  2.0 TiB  0.00  0.67   38      up          osd.6
-10               6.00000         -    6 TiB  149 MiB   92 MiB   0 B   57 MiB  6.0 TiB  0.00  1.04    -          zone us-east-1c
 -9               2.00000         -    2 TiB   73 MiB   40 MiB   0 B   32 MiB  2.0 TiB  0.00  1.52    -          host ocs-deviceset-gp3-csi-0-data-0mgxqb
  1  ssd          2.00000   1.00000    2 TiB   73 MiB   40 MiB   0 B   32 MiB  2.0 TiB  0.00  1.52   38      up          osd.1
-46               2.00000         -    2 TiB   33 MiB   18 MiB   0 B   16 MiB  2.0 TiB  0.00  0.70    -          host us-east-1c-data-0lxvf6
  4  us-east-1c   2.00000   1.00000    2 TiB   33 MiB   18 MiB   0 B   16 MiB  2.0 TiB  0.00  0.70   40      up          osd.4
-66               2.00000         -    2 TiB   43 MiB   34 MiB   0 B  8.8 MiB  2.0 TiB  0.00  0.90    -          host us-east-1c-data-1rm7lr
  8  us-east-1c   2.00000   1.00000    2 TiB   43 MiB   34 MiB   0 B  8.8 MiB  2.0 TiB  0.00  0.90   38      up          osd.8
                              TOTAL   18 TiB  430 MiB  274 MiB   0 B  155 MiB   18 TiB  0.00
MIN/MAX VAR: 0.67/1.52  STDDEV: 0
```
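The `count` field that the patch modifies can also be read back, which is handy before scaling again. A small check using the same field path as the patch above:

```
~ $ oc get storagecluster ocs-storagecluster -n openshift-storage \
    -o jsonpath='{.spec.managedResources.cephNonResilientPools.count}{"\n"}'
```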
## Scaling up the size of the non-resilient osds

### Setting the volumeClaimTemplate for the OSDs

Initially the replica-1 OSDs use a volumeClaimTemplate copied from the first storageDeviceSet. To use a different volumeClaimTemplate, edit the storagecluster and add a spec like the one below at `spec/managedResources/cephNonResilientPools/` (a merge-patch alternative is sketched at the end of this document):

```
volumeClaimTemplate:
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 3Ti
    storageClassName: gp3-csi
    volumeMode: Block
```

### Restarted OSD pods

```
~ $ oc get pods | grep osd
rook-ceph-osd-0-94d78d794-7nqhs                                2/2   Running     0   28m
rook-ceph-osd-1-58c6f87f64-qw95v                               2/2   Running     0   28m
rook-ceph-osd-2-864df8c857-gdx6q                               2/2   Running     0   28m
rook-ceph-osd-3-768b79d98f-rk4mb                               2/2   Running     0   6m11s
rook-ceph-osd-4-868bdd4f4-s8jw2                                2/2   Running     0   5m47s
rook-ceph-osd-5-56b698b7f4-466vm                               2/2   Running     0   5m22s
rook-ceph-osd-6-cb6b64845-vljxv                                2/2   Running     0   4m57s
rook-ceph-osd-7-5dbc69884f-vq99b                               2/2   Running     0   4m25s
rook-ceph-osd-8-7ccdb745cd-wpfmw                               2/2   Running     0   4m
rook-ceph-osd-prepare-80058f20e40692a56d589ed4c012ca48-p6zfj   0/1   Completed   0   29m
rook-ceph-osd-prepare-9a3c3db3f6bdb7b8d7715be5ebe233e8-jp44m   0/1   Completed   0   29m
rook-ceph-osd-prepare-eb1536040cd973c8fda358d13a762cc0-ftscr   0/1   Completed   0   29m
rook-ceph-osd-prepare-us-east-1a-data-097ncw-zvzl6             0/1   Completed   0   20m
rook-ceph-osd-prepare-us-east-1a-data-15dbxn-6mv2v             0/1   Completed   0   12m
rook-ceph-osd-prepare-us-east-1b-data-0qfsfd-5d65f             0/1   Completed   0   20m
rook-ceph-osd-prepare-us-east-1b-data-1sfwkr-t5x4s             0/1   Completed   0   12m
rook-ceph-osd-prepare-us-east-1c-data-04bdtc-ts5k4             0/1   Completed   0   20m
rook-ceph-osd-prepare-us-east-1c-data-19htf4-bz4m6             0/1   Completed   0   12m
```

### Changed PVC Sizes

```
~ $ oc get pvc | grep data
ocs-deviceset-gp3-csi-0-data-09cpzv   Bound   pvc-ccfd8303-7d90-4b66-9bd7-dff235e9ffbf   2Ti   RWO   gp3-csi   <unset>   29m
ocs-deviceset-gp3-csi-1-data-06hgxl   Bound   pvc-9e6f7d23-62fa-447d-ae76-241b77d36610   2Ti   RWO   gp3-csi   <unset>   29m
ocs-deviceset-gp3-csi-2-data-0rdhrb   Bound   pvc-a2a02cef-e45e-4d16-aa62-e9added889a2   2Ti   RWO   gp3-csi   <unset>   29m
us-east-1a-data-097ncw                Bound   pvc-d7c49c55-4655-4ecd-b4af-e764cfbbe048   3Ti   RWO   gp3-csi   <unset>   20m
us-east-1a-data-15dbxn                Bound   pvc-7566049f-34a5-4a10-970f-69e59130d363   3Ti   RWO   gp3-csi   <unset>   12m
us-east-1b-data-0qfsfd                Bound   pvc-495abdac-20dd-4ec8-b118-bea24c682d7d   3Ti   RWO   gp3-csi   <unset>   20m
us-east-1b-data-1sfwkr                Bound   pvc-2da42b5e-08c6-4ad6-8b68-031c3b879242   3Ti   RWO   gp3-csi   <unset>   12m
us-east-1c-data-04bdtc                Bound   pvc-f801545a-f6b3-4b4a-874c-6d0a7a3450e9   3Ti   RWO   gp3-csi   <unset>   20m
us-east-1c-data-19htf4                Bound   pvc-31d9af18-7bff-4306-91e5-7e0f8dc53394   3Ti   RWO   gp3-csi   <unset>   12m
```

### Ceph details

```
sh-5.1$ ceph osd df tree
ID   CLASS       WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
 -1              18.00000         -   23 TiB  5.0 TiB  237 MiB  41 KiB  139 MiB   18 TiB  21.74  1.00    -          root default
 -5              18.00000         -   23 TiB  5.0 TiB  237 MiB  41 KiB  139 MiB   18 TiB  21.74  1.00    -          region us-east-1
-14               6.00000         -    8 TiB  2.0 TiB   78 MiB  14 KiB   46 MiB  6.0 TiB  25.00  1.15    -          zone us-east-1a
-13               2.00000         -    2 TiB   53 MiB   19 MiB     0 B   34 MiB  2.0 TiB   0.00     0    -          host ocs-deviceset-gp3-csi-2-data-0rdhrb
  2  ssd          2.00000   1.00000    2 TiB   53 MiB   19 MiB     0 B   34 MiB  2.0 TiB   0.00     0   32      up          osd.2
-51               2.00000         -    3 TiB  1.0 TiB   20 MiB   4 KiB  6.2 MiB  2.0 TiB  33.33  1.53    -          host us-east-1a-data-097ncw
  5  us-east-1a   2.00000   1.00000    3 TiB  1.0 TiB   20 MiB   4 KiB  6.2 MiB  2.0 TiB  33.33  1.53   43      up          osd.5
-66               2.00000         -    3 TiB  1.0 TiB   39 MiB  10 KiB  6.1 MiB  2.0 TiB  33.33  1.53    -          host us-east-1a-data-15dbxn
  8  us-east-1a   2.00000   1.00000    3 TiB  1.0 TiB   39 MiB  10 KiB  6.1 MiB  2.0 TiB  33.33  1.53   38      up          osd.8
 -4               6.00000         -    8 TiB  2.0 TiB   79 MiB  12 KiB   46 MiB  6.0 TiB  25.00  1.15    -          zone us-east-1b
 -3               2.00000         -    2 TiB   57 MiB   23 MiB     0 B   34 MiB  2.0 TiB   0.00     0    -          host ocs-deviceset-gp3-csi-1-data-06hgxl
  1  ssd          2.00000   1.00000    2 TiB   57 MiB   23 MiB     0 B   34 MiB  2.0 TiB   0.00     0   37      up          osd.1
-46               2.00000         -    3 TiB  1.0 TiB   26 MiB   4 KiB  6.2 MiB  2.0 TiB  33.33  1.53    -          host us-east-1b-data-0qfsfd
  4  us-east-1b   2.00000   1.00000    3 TiB  1.0 TiB   26 MiB   4 KiB  6.2 MiB  2.0 TiB  33.33  1.53   37      up          osd.4
-61               2.00000         -    3 TiB  1.0 TiB   30 MiB   8 KiB  6.2 MiB  2.0 TiB  33.33  1.53    -          host us-east-1b-data-1sfwkr
  7  us-east-1b   2.00000   1.00000    3 TiB  1.0 TiB   30 MiB   8 KiB  6.2 MiB  2.0 TiB  33.33  1.53   40      up          osd.7
-10               6.00000         -    7 TiB  1.0 TiB   79 MiB  15 KiB   47 MiB  6.0 TiB  14.29  0.66    -          zone us-east-1c
 -9               2.00000         -    2 TiB   46 MiB   13 MiB     0 B   33 MiB  2.0 TiB   0.00     0    -          host ocs-deviceset-gp3-csi-0-data-09cpzv
  0  ssd          2.00000   1.00000    2 TiB   46 MiB   13 MiB     0 B   33 MiB  2.0 TiB   0.00     0   35      up          osd.0
-41               2.00000         -    2 TiB   47 MiB   40 MiB  11 KiB  6.7 MiB  2.0 TiB   0.00     0    -          host us-east-1c-data-04bdtc
  3  us-east-1c   2.00000   1.00000    2 TiB   47 MiB   40 MiB  11 KiB  6.7 MiB  2.0 TiB   0.00     0   37      up          osd.3
-56               2.00000         -    3 TiB  1.0 TiB   27 MiB   4 KiB  6.5 MiB  2.0 TiB  33.33  1.53    -          host us-east-1c-data-19htf4
  6  us-east-1c   2.00000   1.00000    3 TiB  1.0 TiB   27 MiB   4 KiB  6.5 MiB  2.0 TiB  33.33  1.53   43      up          osd.6
                              TOTAL   23 TiB  5.0 TiB  237 MiB  44 KiB  139 MiB   18 TiB  21.74
MIN/MAX VAR: 0/1.53  STDDEV: 16.87
```
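As an alternative to editing the storagecluster by hand (see "Setting the volumeClaimTemplate for the OSDs" above), the same volumeClaimTemplate can be applied non-interactively. A sketch using a merge patch; the field path matches the one given in that section, but treat the exact invocation as an example rather than the required procedure:

```
# Apply the 3Ti gp3-csi volumeClaimTemplate for the replica-1 OSDs in one step
~ $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
    --patch '{"spec":{"managedResources":{"cephNonResilientPools":{"volumeClaimTemplate":{"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"3Ti"}},"storageClassName":"gp3-csi","volumeMode":"Block"}}}}}}'
```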