# Testing for replica-1 deviceclass BZ
### Case 1: Cluster was created with old code, then upgraded to new code
#### Old code
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "ssd"
}
]
// The device class on the CephBlockPool is empty
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
```
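As an optional cross-check (a sketch; it assumes access to the same toolbox shell used for the `ceph osd df tree` output below), the device classes reported in the CephCluster status can be compared against the classes actually present in the CRUSH map:
```
// List the CRUSH device classes assigned to the OSDs.
// With the old code and replica-1 disabled, only "ssd" should be listed.
sh-4.4$ ceph osd crush class ls
```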
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  nodeSelector:
    # Change according to failure domain
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: rbd-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
      securityContext:
        privileged: true
EOF
~ $ oc rsh task-pv-pod
# cd /usr/share/nginx/html
# ls -lh
total 16K
drwxrws---. 2 root 1000690000 16K Jun 11 09:34 lost+found
# tr -dc "A-Za-z 0-9" < /dev/urandom | fold -w100|head -n 100000000 >file.txt
# ls -lh
total 9.5G
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 09:44 file.txt
drwxrws---. 2 root 1000690000 16K Jun 11 09:34 lost+found
```
```
// Data distribution looks correct: usage is spread evenly across the three ssd OSDs
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 6.00000 - 6 TiB 29 GiB 28 GiB 0 B 343 MiB 6.0 TiB 0.47 1.00 - root default
-5 6.00000 - 6 TiB 29 GiB 28 GiB 0 B 343 MiB 6.0 TiB 0.47 1.00 - region us-east-1
-14 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 2.0 TiB 0.47 1.00 - zone us-east-1a
-13 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 2.0 TiB 0.47 1.00 - host ocs-deviceset-gp3-csi-1-data-04j76w
2 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 2.0 TiB 0.47 1.00 113 up osd.2
-4 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 2.0 TiB 0.47 1.00 - zone us-east-1b
-3 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 2.0 TiB 0.47 1.00 - host ocs-deviceset-gp3-csi-2-data-0qlppx
1 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 2.0 TiB 0.47 1.00 113 up osd.1
-10 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 2.0 TiB 0.47 1.00 - zone us-east-1c
-9 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 2.0 TiB 0.47 1.00 - host ocs-deviceset-gp3-csi-0-data-057vqf
0 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 2.0 TiB 0.47 1.00 113 up osd.0
TOTAL 6 TiB 29 GiB 28 GiB 0 B 343 MiB 6.0 TiB 0.47
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
```
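The `ceph osd df tree` view can be complemented with per-pool statistics; as a sketch from the same toolbox shell, this confirms that the ~9.5 GiB written above landed in the replicated block pool:
```
// Per-pool usage; the written data should show up under
// ocs-storagecluster-cephblockpool (3x replicated across the ssd OSDs).
sh-4.4$ ceph df
```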
#### Upgrade to new code
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "ssd"
}
]
// The default device class is now reported in the StorageCluster status
~ $ oc get storagecluster ocs-storagecluster -o=jsonpath='{.status.defaultCephDeviceClass}' | jq -R '{defaultCephDeviceClass: .}'
{
"defaultCephDeviceClass": "ssd"
}
// Pools are using the defaultCephDeviceClass
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
{
"deviceClass": "ssd"
}
```
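Since more than one pool inherits the default, a quick way to check them all at once (a sketch using `oc` custom columns with the same field paths as the commands above) is:
```
// Print the device class of every CephBlockPool in the namespace.
~ $ oc get cephblockpool -n openshift-storage -o custom-columns=NAME:.metadata.name,DEVICECLASS:.spec.deviceClass
```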
```
# tr -dc "A-Za-z 0-9" < /dev/urandom | fold -w100|head -n 100000000 >file2.txt
# ls -lh
total 19G
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 09:44 file.txt
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 09:58 file2.txt
drwxrws---. 2 root 1000690000 16K Jun 11 09:34 lost+found
```
```
// Data distribution remains correct after the upgrade
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 6.00000 - 6 TiB 57 GiB 57 GiB 0 B 514 MiB 5.9 TiB 0.93 1.00 - root default
-5 6.00000 - 6 TiB 57 GiB 57 GiB 0 B 514 MiB 5.9 TiB 0.93 1.00 - region us-east-1
-14 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 171 MiB 2.0 TiB 0.93 1.00 - zone us-east-1a
-13 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 171 MiB 2.0 TiB 0.93 1.00 - host ocs-deviceset-gp3-csi-1-data-04j76w
2 ssd 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 171 MiB 2.0 TiB 0.93 1.00 113 up osd.2
-4 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 171 MiB 2.0 TiB 0.93 1.00 - zone us-east-1b
-3 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 171 MiB 2.0 TiB 0.93 1.00 - host ocs-deviceset-gp3-csi-2-data-0qlppx
1 ssd 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 171 MiB 2.0 TiB 0.93 1.00 113 up osd.1
-10 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 171 MiB 2.0 TiB 0.93 1.00 - zone us-east-1c
-9 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 171 MiB 2.0 TiB 0.93 1.00 - host ocs-deviceset-gp3-csi-0-data-057vqf
0 ssd 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 171 MiB 2.0 TiB 0.93 1.00 113 up osd.0
TOTAL 6 TiB 57 GiB 57 GiB 0 B 514 MiB 5.9 TiB 0.93
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
```
### Case 2: Cluster was created and replica-1 was enabled on the old code, then upgraded to new code
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "ssd"
}
]
// The pool has no device class set
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
```
#### Enable replica-1
```
~ $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephNonResilientPools/enable", "value": true }]'
```
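The same change can be applied with a merge-style patch and then read back (a sketch; it only uses the `spec.managedResources.cephNonResilientPools.enable` path already shown above):
```
// Equivalent merge patch and verification of the resulting spec.
~ $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge --patch '{"spec":{"managedResources":{"cephNonResilientPools":{"enable":true}}}}'
~ $ oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.managedResources.cephNonResilientPools.enable}'
```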
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "ssd"
},
{
"name": "us-east-1a"
},
{
"name": "us-east-1b"
},
{
"name": "us-east-1c"
}
]
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
{
"deviceClass": "replicated"
}
```
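With the pool spec now claiming the "replicated" device class while the OSDs are classed "ssd", the mismatch can also be inspected on the Ceph side. A sketch from the toolbox (the rule name comes from the first command's output; in class-constrained rules the root item typically appears as `default~<class>`):
```
// Which CRUSH rule does the block pool use, and is it constrained to a device class?
sh-4.4$ ceph osd pool get ocs-storagecluster-cephblockpool crush_rule
sh-4.4$ ceph osd crush rule dump <rule-name-from-previous-output>
```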
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  nodeSelector:
    # Change according to failure domain
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: rbd-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
      securityContext:
        privileged: true
EOF
~ $ oc rsh task-pv-pod
# cd /usr/share/nginx/html
# ls -lh
total 16K
drwxrws---. 2 root 1000690000 16K Jun 11 08:33 lost+found
# tr -dc "A-Za-z 0-9" < /dev/urandom | fold -w100|head -n 100000000 >file.txt
# ls -lh
total 9.5G
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 08:38 file.txt
drwxrws---. 2 root 1000690000 16K Jun 11 08:33 lost+found
```
```
// The data placement problem is visible here:
// data is being stored on the replica-1 (zone device class) OSDs.
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 12.00000 - 12 TiB 29 GiB 28 GiB 0 B 421 MiB 12 TiB 0.23 1.00 - root default
-5 12.00000 - 12 TiB 29 GiB 28 GiB 0 B 421 MiB 12 TiB 0.23 1.00 - region us-east-1
-14 4.00000 - 4 TiB 9.6 GiB 9.5 GiB 0 B 139 MiB 4.0 TiB 0.23 1.00 - zone us-east-1a
-13 2.00000 - 2 TiB 4.1 GiB 4.0 GiB 0 B 105 MiB 2.0 TiB 0.20 0.85 - host ocs-deviceset-gp3-csi-1-data-0krd9w
2 ssd 2.00000 1.00000 2 TiB 4.1 GiB 4.0 GiB 0 B 105 MiB 2.0 TiB 0.20 0.85 51 up osd.2
-51 2.00000 - 2 TiB 5.5 GiB 5.5 GiB 0 B 34 MiB 2.0 TiB 0.27 1.15 - host us-east-1a-data-0hpg59
3 us-east-1a 2.00000 1.00000 2 TiB 5.5 GiB 5.5 GiB 0 B 34 MiB 2.0 TiB 0.27 1.15 78 up osd.3
-4 4.00000 - 4 TiB 9.6 GiB 9.5 GiB 0 B 143 MiB 4.0 TiB 0.23 1.00 - zone us-east-1b
-3 2.00000 - 2 TiB 5.4 GiB 5.4 GiB 0 B 60 MiB 2.0 TiB 0.26 1.13 - host ocs-deviceset-gp3-csi-2-data-097ps9
0 ssd 2.00000 1.00000 2 TiB 5.4 GiB 5.4 GiB 0 B 60 MiB 2.0 TiB 0.26 1.13 59 up osd.0
-46 2.00000 - 2 TiB 4.2 GiB 4.1 GiB 0 B 83 MiB 2.0 TiB 0.21 0.87 - host us-east-1b-data-0gq8mm
4 us-east-1b 2.00000 1.00000 2 TiB 4.2 GiB 4.1 GiB 0 B 83 MiB 2.0 TiB 0.21 0.87 70 up osd.4
-10 4.00000 - 4 TiB 9.6 GiB 9.5 GiB 0 B 139 MiB 4.0 TiB 0.23 1.00 - zone us-east-1c
-9 2.00000 - 2 TiB 4.2 GiB 4.1 GiB 0 B 56 MiB 2.0 TiB 0.20 0.87 - host ocs-deviceset-gp3-csi-0-data-0j97br
1 ssd 2.00000 1.00000 2 TiB 4.2 GiB 4.1 GiB 0 B 56 MiB 2.0 TiB 0.20 0.87 57 up osd.1
-41 2.00000 - 2 TiB 5.4 GiB 5.4 GiB 0 B 83 MiB 2.0 TiB 0.27 1.13 - host us-east-1c-data-0n8c22
5 us-east-1c 2.00000 1.00000 2 TiB 5.4 GiB 5.4 GiB 0 B 83 MiB 2.0 TiB 0.27 1.13 72 up osd.5
TOTAL 12 TiB 29 GiB 28 GiB 0 B 421 MiB 12 TiB 0.23
MIN/MAX VAR: 0.85/1.15 STDDEV: 0.03
```
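The misplacement can also be checked per PG. As a sketch from the toolbox, listing the PGs of the replicated block pool should show acting sets that include osd.3, osd.4, and osd.5, consistent with the usage above:
```
// List the PGs of the replicated block pool and inspect the ACTING column;
// PGs acting on osd.3/osd.4/osd.5 (the zone-class OSDs) confirm the misplacement.
sh-4.4$ ceph pg ls-by-pool ocs-storagecluster-cephblockpool
```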
#### Upgrade to new code
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "ssd"
},
{
"name": "us-east-1a"
},
{
"name": "us-east-1b"
},
{
"name": "us-east-1c"
}
]
~ $ oc get storagecluster ocs-storagecluster -o=jsonpath='{.status.defaultCephDeviceClass}' | jq -R '{defaultCephDeviceClass: .}'
{
"defaultCephDeviceClass": "ssd"
}
// The pool now has the device class set
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
{
"deviceClass": "ssd"
}
```
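Setting the device class on the pool changes its CRUSH placement, so the upgrade triggers backfill. As a sketch, the data movement can be followed from the toolbox before re-checking the distribution:
```
// Watch cluster health and recovery/backfill progress while PGs are remapped
// back onto the ssd OSDs.
sh-4.4$ ceph -s
```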
```
// After the upgrade the data has been rebalanced and moved to the correct (ssd) OSDs
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 12.00000 - 12 TiB 29 GiB 29 GiB 0 B 411 MiB 12 TiB 0.24 1.00 - root default
-5 12.00000 - 12 TiB 29 GiB 29 GiB 0 B 411 MiB 12 TiB 0.24 1.00 - region us-east-1
-14 4.00000 - 4 TiB 9.8 GiB 9.6 GiB 0 B 149 MiB 4.0 TiB 0.24 1.00 - zone us-east-1a
-13 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 97 MiB 2.0 TiB 0.47 1.96 - host ocs-deviceset-gp3-csi-1-data-0krd9w
2 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 97 MiB 2.0 TiB 0.47 1.96 105 up osd.2
-51 2.00000 - 2 TiB 219 MiB 167 MiB 0 B 52 MiB 2.0 TiB 0.01 0.04 - host us-east-1a-data-0hpg59
3 us-east-1a 2.00000 1.00000 2 TiB 219 MiB 167 MiB 0 B 52 MiB 2.0 TiB 0.01 0.04 23 up osd.3
-4 4.00000 - 4 TiB 9.7 GiB 9.6 GiB 0 B 131 MiB 4.0 TiB 0.24 1.00 - zone us-east-1b
-3 2.00000 - 2 TiB 9.4 GiB 9.3 GiB 0 B 48 MiB 2.0 TiB 0.46 1.92 - host ocs-deviceset-gp3-csi-2-data-097ps9
0 ssd 2.00000 1.00000 2 TiB 9.4 GiB 9.3 GiB 0 B 48 MiB 2.0 TiB 0.46 1.92 107 up osd.0
-46 2.00000 - 2 TiB 354 MiB 271 MiB 0 B 83 MiB 2.0 TiB 0.02 0.07 - host us-east-1b-data-0gq8mm
4 us-east-1b 2.00000 1.00000 2 TiB 354 MiB 271 MiB 0 B 83 MiB 2.0 TiB 0.02 0.07 22 up osd.4
-10 4.00000 - 4 TiB 9.8 GiB 9.7 GiB 0 B 131 MiB 4.0 TiB 0.24 1.00 - zone us-east-1c
-9 2.00000 - 2 TiB 9.3 GiB 9.2 GiB 0 B 48 MiB 2.0 TiB 0.45 1.90 - host ocs-deviceset-gp3-csi-0-data-0j97br
1 ssd 2.00000 1.00000 2 TiB 9.3 GiB 9.2 GiB 0 B 48 MiB 2.0 TiB 0.45 1.90 104 up osd.1
-41 2.00000 - 2 TiB 516 MiB 433 MiB 0 B 83 MiB 2.0 TiB 0.02 0.10 - host us-east-1c-data-0n8c22
5 us-east-1c 2.00000 1.00000 2 TiB 516 MiB 433 MiB 0 B 83 MiB 2.0 TiB 0.02 0.10 23 up osd.5
TOTAL 12 TiB 29 GiB 29 GiB 0 B 411 MiB 12 TiB 0.24
MIN/MAX VAR: 0.04/1.96 STDDEV: 0.22
```
```
# tr -dc "A-Za-z 0-9" < /dev/urandom | fold -w100|head -n 100000000 >file2.txt
# ls -lh
total 19G
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 08:38 file.txt
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 08:48 file2.txt
drwxrws---. 2 root 1000690000 16K Jun 11 08:33 lost+found
```
```
// Data placement is correct: the replica-1 OSDs are not used for the new data
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 12.00000 - 12 TiB 57 GiB 57 GiB 0 B 670 MiB 12 TiB 0.47 1.00 - root default
-5 12.00000 - 12 TiB 57 GiB 57 GiB 0 B 670 MiB 12 TiB 0.47 1.00 - region us-east-1
-14 4.00000 - 4 TiB 19 GiB 19 GiB 0 B 241 MiB 4.0 TiB 0.47 1.00 - zone us-east-1a
-13 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 189 MiB 2.0 TiB 0.93 2.00 - host ocs-deviceset-gp3-csi-1-data-0krd9w
2 ssd 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 189 MiB 2.0 TiB 0.93 2.00 106 up osd.2
-51 2.00000 - 2 TiB 55 MiB 3.4 MiB 0 B 52 MiB 2.0 TiB 0.00 0.01 - host us-east-1a-data-0hpg59
3 us-east-1a 2.00000 1.00000 2 TiB 55 MiB 3.4 MiB 0 B 52 MiB 2.0 TiB 0.00 0.01 23 up osd.3
-4 4.00000 - 4 TiB 19 GiB 19 GiB 0 B 223 MiB 4.0 TiB 0.47 1.00 - zone us-east-1b
-3 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 140 MiB 2.0 TiB 0.93 1.99 - host ocs-deviceset-gp3-csi-2-data-097ps9
0 ssd 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 140 MiB 2.0 TiB 0.93 1.99 107 up osd.0
-46 2.00000 - 2 TiB 87 MiB 3.3 MiB 0 B 83 MiB 2.0 TiB 0.00 0.01 - host us-east-1b-data-0gq8mm
4 us-east-1b 2.00000 1.00000 2 TiB 87 MiB 3.3 MiB 0 B 83 MiB 2.0 TiB 0.00 0.01 22 up osd.4
-10 4.00000 - 4 TiB 19 GiB 19 GiB 0 B 206 MiB 4.0 TiB 0.47 1.00 - zone us-east-1c
-9 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 122 MiB 2.0 TiB 0.93 1.99 - host ocs-deviceset-gp3-csi-0-data-0j97br
1 ssd 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 122 MiB 2.0 TiB 0.93 1.99 106 up osd.1
-41 2.00000 - 2 TiB 87 MiB 4.0 MiB 0 B 83 MiB 2.0 TiB 0.00 0.01 - host us-east-1c-data-0n8c22
5 us-east-1c 2.00000 1.00000 2 TiB 87 MiB 4.0 MiB 0 B 83 MiB 2.0 TiB 0.00 0.01 23 up osd.5
TOTAL 12 TiB 57 GiB 57 GiB 0 B 670 MiB 12 TiB 0.47
MIN/MAX VAR: 0.01/2.00 STDDEV: 0.46
```
### Case 3: Cluster was created with replica-1 on the old code, then upgraded to new code
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "replicated"
},
{
"name": "us-east-1b"
},
{
"name": "us-east-1c"
},
{
"name": "us-east-1a"
}
]
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
{
"deviceClass": "replicated"
}
```
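In this case the regular OSDs carry the "replicated" device class (see the `ceph osd df tree` output below), so the pool's device class and the OSD class already agree. A sketch from the toolbox to confirm which OSDs sit in each class:
```
// List the CRUSH device classes and the OSDs assigned to each of them.
sh-4.4$ ceph osd crush class ls
sh-4.4$ ceph osd crush class ls-osd replicated
sh-4.4$ ceph osd crush class ls-osd us-east-1a
```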
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  nodeSelector:
    # Change according to failure domain
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: rbd-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
      securityContext:
        privileged: true
EOF
~ $ oc rsh task-pv-pod
# cd /usr/share/nginx/html
# ls -lh
total 16K
drwxrws---. 2 root 1000690000 16K Jun 11 07:45 lost+found
# tr -dc "A-Za-z 0-9" < /dev/urandom | fold -w100|head -n 100000000 >file.txt
# ls -lh
total 9.5G
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 07:50 file.txt
drwxrws---. 2 root 1000690000 16K Jun 11 07:45 lost+found
```
```
// Data placement is correct
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 12.00000 - 12 TiB 29 GiB 28 GiB 0 B 332 MiB 12 TiB 0.23 1.00 - root default
-8 12.00000 - 12 TiB 29 GiB 28 GiB 0 B 332 MiB 12 TiB 0.23 1.00 - region us-east-1
-42 4.00000 - 4 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 4.0 TiB 0.23 1.00 - zone us-east-1a
-51 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.99 - host ocs-deviceset-gp3-csi-1-data-0v7tv8
5 replicated 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.99 3 up osd.5
-41 2.00000 - 2 TiB 28 MiB 1.6 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 - host us-east-1a-data-0b78sd
4 us-east-1a 2.00000 1.00000 2 TiB 28 MiB 1.6 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 18 up osd.4
-7 4.00000 - 4 TiB 9.6 GiB 9.5 GiB 0 B 110 MiB 4.0 TiB 0.23 1.00 - zone us-east-1b
-21 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 83 MiB 2.0 TiB 0.47 1.99 - host ocs-deviceset-gp3-csi-0-data-0n6xh2
0 replicated 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 83 MiB 2.0 TiB 0.47 1.99 3 up osd.0
-6 2.00000 - 2 TiB 28 MiB 1.5 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 - host us-east-1b-data-0rb5t5
1 us-east-1b 2.00000 1.00000 2 TiB 28 MiB 1.5 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 17 up osd.1
-27 4.00000 - 4 TiB 9.6 GiB 9.5 GiB 0 B 110 MiB 4.0 TiB 0.23 1.00 - zone us-east-1c
-26 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 83 MiB 2.0 TiB 0.47 1.99 - host ocs-deviceset-gp3-csi-2-data-0qrwfk
3 replicated 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 83 MiB 2.0 TiB 0.47 1.99 2 up osd.3
-36 2.00000 - 2 TiB 27 MiB 1.0 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 - host us-east-1c-data-0qtnv9
2 us-east-1c 2.00000 1.00000 2 TiB 27 MiB 1.0 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 17 up osd.2
TOTAL 12 TiB 29 GiB 28 GiB 0 B 332 MiB 12 TiB 0.23
MIN/MAX VAR: 0.01/1.99 STDDEV: 0.23
```
#### Upgrade to new code
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "replicated"
},
{
"name": "us-east-1b"
},
{
"name": "us-east-1c"
},
{
"name": "us-east-1a"
}
]
~ $ oc get storagecluster ocs-storagecluster -o=jsonpath='{.status.defaultCephDeviceClass}' | jq -R '{defaultCephDeviceClass: .}'
{
"defaultCephDeviceClass": "replicated"
}
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
{
"deviceClass": "replicated"
}
```
```
# tr -dc "A-Za-z 0-9" < /dev/urandom | fold -w100|head -n 100000000 >file2.txt
# ls -lh
total 19G
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 07:50 file.txt
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 08:01 file2.txt
drwxrws---. 2 root 1000690000 16K Jun 11 07:45 lost+found
```
```
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 12.00000 - 12 TiB 57 GiB 57 GiB 0 B 552 MiB 12 TiB 0.47 1.00 - root default
-8 12.00000 - 12 TiB 57 GiB 57 GiB 0 B 552 MiB 12 TiB 0.47 1.00 - region us-east-1
-42 4.00000 - 4 TiB 19 GiB 19 GiB 0 B 184 MiB 4.0 TiB 0.47 1.00 - zone us-east-1a
-51 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 - host ocs-deviceset-gp3-csi-1-data-0v7tv8
5 replicated 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 3 up osd.5
-41 2.00000 - 2 TiB 28 MiB 1.9 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 - host us-east-1a-data-0b78sd
4 us-east-1a 2.00000 1.00000 2 TiB 28 MiB 1.9 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 18 up osd.4
-7 4.00000 - 4 TiB 19 GiB 19 GiB 0 B 184 MiB 4.0 TiB 0.47 1.00 - zone us-east-1b
-21 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 - host ocs-deviceset-gp3-csi-0-data-0n6xh2
0 replicated 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 3 up osd.0
-6 2.00000 - 2 TiB 28 MiB 1.8 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 - host us-east-1b-data-0rb5t5
1 us-east-1b 2.00000 1.00000 2 TiB 28 MiB 1.8 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 17 up osd.1
-27 4.00000 - 4 TiB 19 GiB 19 GiB 0 B 184 MiB 4.0 TiB 0.47 1.00 - zone us-east-1c
-26 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 - host ocs-deviceset-gp3-csi-2-data-0qrwfk
3 replicated 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 2 up osd.3
-36 2.00000 - 2 TiB 28 MiB 1.3 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 - host us-east-1c-data-0qtnv9
2 us-east-1c 2.00000 1.00000 2 TiB 28 MiB 1.3 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 17 up osd.2
TOTAL 12 TiB 57 GiB 57 GiB 0 B 552 MiB 12 TiB 0.47
MIN/MAX VAR: 0.00/2.00 STDDEV: 0.46
```
### Case 4: Cluster is created and replica-1 is enabled, both on the new code
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "ssd"
}
]
~ $ oc get storagecluster ocs-storagecluster -o=jsonpath='{.status.defaultCephDeviceClass}' | jq -R '{defaultCephDeviceClass: .}'
{
"defaultCephDeviceClass": "ssd"
}
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
{
"deviceClass": "ssd"
}
```
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  nodeSelector:
    # Change according to failure domain
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: rbd-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
      securityContext:
        privileged: true
EOF
~ $ oc rsh task-pv-pod
# cd /usr/share/nginx/html
# ls -lh
total 16K
drwxrws---. 2 root 1000690000 16K Jun 11 07:08 lost+found
# tr -dc "A-Za-z 0-9" < /dev/urandom | fold -w100|head -n 100000000 >file.txt
# ls -lh
total 9.5G
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 07:13 file.txt
drwxrws---. 2 root 1000690000 16K Jun 11 07:08 lost+found
```
```
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 6.00000 - 6 TiB 29 GiB 28 GiB 0 B 258 MiB 6.0 TiB 0.47 1.00 - root default
-5 6.00000 - 6 TiB 29 GiB 28 GiB 0 B 258 MiB 6.0 TiB 0.47 1.00 - region us-east-1
-4 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.00 - zone us-east-1a
-3 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.00 - host ocs-deviceset-gp3-csi-1-data-0lcs5m
0 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.00 4 up osd.0
-14 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.00 - zone us-east-1b
-13 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.00 - host ocs-deviceset-gp3-csi-2-data-0spm25
2 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.00 4 up osd.2
-10 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 83 MiB 2.0 TiB 0.47 1.00 - zone us-east-1c
-9 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 83 MiB 2.0 TiB 0.47 1.00 - host ocs-deviceset-gp3-csi-0-data-0g4l49
1 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 83 MiB 2.0 TiB 0.47 1.00 4 up osd.1
TOTAL 6 TiB 29 GiB 28 GiB 0 B 258 MiB 6.0 TiB 0.47
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
```
#### Enable replica-1
```
~ $ oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephNonResilientPools/enable", "value": true }]'
```
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "ssd"
},
{
"name": "us-east-1b"
},
{
"name": "us-east-1c"
},
{
"name": "us-east-1a"
}
]
~ $ oc get storagecluster ocs-storagecluster -o=jsonpath='{.status.defaultCephDeviceClass}' | jq -R '{defaultCephDeviceClass: .}'
{
"defaultCephDeviceClass": "ssd"
}
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
{
"deviceClass": "ssd"
}
```
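To identify the non-resilient pools created by enabling the feature, the replica size can be listed next to the device class (a sketch; `.spec.replicated.size` is the standard CephBlockPool field, and the replica-1 pools are expected to report size 1 with their zone device class):
```
// List every CephBlockPool with its device class and replica count;
// the replica-1 (non-resilient) pools should report SIZE 1.
~ $ oc get cephblockpool -n openshift-storage -o custom-columns=NAME:.metadata.name,DEVICECLASS:.spec.deviceClass,SIZE:.spec.replicated.size
```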
```
# tr -dc "A-Za-z 0-9" < /dev/urandom | fold -w100|head -n 100000000 >file2.txt
# ls -lh
total 19G
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 07:13 file.txt
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 07:25 file2.txt
drwxrws---. 2 root 1000690000 16K Jun 11 07:08 lost+found
```
```
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 12.00000 - 12 TiB 57 GiB 57 GiB 0 B 552 MiB 12 TiB 0.47 1.00 - root default
-5 12.00000 - 12 TiB 57 GiB 57 GiB 0 B 552 MiB 12 TiB 0.47 1.00 - region us-east-1
-4 4.00000 - 4 TiB 19 GiB 19 GiB 0 B 184 MiB 4.0 TiB 0.47 1.00 - zone us-east-1a
-3 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 - host ocs-deviceset-gp3-csi-1-data-0lcs5m
0 ssd 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 4 up osd.0
-46 2.00000 - 2 TiB 28 MiB 1.3 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 - host us-east-1a-data-0cvxt8
5 us-east-1a 2.00000 1.00000 2 TiB 28 MiB 1.3 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 16 up osd.5
-14 4.00000 - 4 TiB 19 GiB 19 GiB 0 B 184 MiB 4.0 TiB 0.47 1.00 - zone us-east-1b
-13 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 - host ocs-deviceset-gp3-csi-2-data-0spm25
2 ssd 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 4 up osd.2
-51 2.00000 - 2 TiB 28 MiB 1.3 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 - host us-east-1b-data-0x5hl8
3 us-east-1b 2.00000 1.00000 2 TiB 28 MiB 1.3 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 16 up osd.3
-10 4.00000 - 4 TiB 19 GiB 19 GiB 0 B 184 MiB 4.0 TiB 0.47 1.00 - zone us-east-1c
-9 2.00000 - 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 - host ocs-deviceset-gp3-csi-0-data-0g4l49
1 ssd 2.00000 1.00000 2 TiB 19 GiB 19 GiB 0 B 158 MiB 2.0 TiB 0.93 2.00 3 up osd.1
-41 2.00000 - 2 TiB 28 MiB 1.3 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 - host us-east-1c-data-0kv4wh
4 us-east-1c 2.00000 1.00000 2 TiB 28 MiB 1.3 MiB 0 B 26 MiB 2.0 TiB 0.00 0.00 17 up osd.4
TOTAL 12 TiB 57 GiB 57 GiB 0 B 552 MiB 12 TiB 0.47
MIN/MAX VAR: 0.00/2.00 STDDEV: 0.46
```
### Case 5: Cluster is created from scratch with replica-1 enabled on the new code
```
~ $ oc get cephcluster ocs-storagecluster-cephcluster -o=jsonpath='{.status.storage.deviceClasses}' | jq
[
{
"name": "us-east-1a"
},
{
"name": "ssd"
},
{
"name": "us-east-1c"
},
{
"name": "us-east-1b"
}
]
~ $ oc get storagecluster ocs-storagecluster -o=jsonpath='{.status.defaultCephDeviceClass}' | jq -R '{defaultCephDeviceClass: .}'
{
"defaultCephDeviceClass": "ssd"
}
~ $ oc get cephblockpool ocs-storagecluster-cephblockpool -o=jsonpath='{.spec.deviceClass}' | jq -R '{deviceClass: .}'
{
"deviceClass": "ssd"
}
```
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  nodeSelector:
    # Change according to failure domain
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: rbd-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
      securityContext:
        privileged: true
EOF
~ $ oc rsh task-pv-pod
# cd /usr/share/nginx/html
# ls -lh
total 16K
drwxrws---. 2 root 1000690000 16K Jun 11 06:43 lost+found
# tr -dc "A-Za-z 0-9" < /dev/urandom | fold -w100|head -n 100000000 >file.txt
# ls -lh
total 9.5G
-rw-r--r--. 1 root 1000690000 9.5G Jun 11 06:49 file.txt
drwxrws---. 2 root 1000690000 16K Jun 11 06:43 lost+found
```
```
sh-4.4$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 12.00000 - 12 TiB 29 GiB 28 GiB 0 B 340 MiB 12 TiB 0.23 1.00 - root default
-7 12.00000 - 12 TiB 29 GiB 28 GiB 0 B 340 MiB 12 TiB 0.23 1.00 - region us-east-1
-18 4.00000 - 4 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 4.0 TiB 0.23 1.00 - zone us-east-1a
-29 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.99 - host ocs-deviceset-gp3-csi-1-data-045pnn
1 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.99 4 up osd.1
-17 2.00000 - 2 TiB 28 MiB 1.6 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 - host us-east-1a-data-04jv4z
0 us-east-1a 2.00000 1.00000 2 TiB 28 MiB 1.6 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 17 up osd.0
-42 4.00000 - 4 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 4.0 TiB 0.23 1.00 - zone us-east-1b
-51 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.99 - host ocs-deviceset-gp3-csi-0-data-04mnbr
4 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.99 4 up osd.4
-41 2.00000 - 2 TiB 27 MiB 1.1 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 - host us-east-1b-data-0tlsk7
5 us-east-1b 2.00000 1.00000 2 TiB 27 MiB 1.1 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 16 up osd.5
-6 4.00000 - 4 TiB 9.6 GiB 9.5 GiB 0 B 114 MiB 4.0 TiB 0.23 1.00 - zone us-east-1c
-25 2.00000 - 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.99 - host ocs-deviceset-gp3-csi-2-data-07s5kg
3 ssd 2.00000 1.00000 2 TiB 9.6 GiB 9.5 GiB 0 B 87 MiB 2.0 TiB 0.47 1.99 3 up osd.3
-5 2.00000 - 2 TiB 27 MiB 1.1 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 - host us-east-1c-data-0wq4jv
2 us-east-1c 2.00000 1.00000 2 TiB 27 MiB 1.1 MiB 0 B 26 MiB 2.0 TiB 0.00 0.01 16 up osd.2
TOTAL 12 TiB 29 GiB 28 GiB 0 B 340 MiB 12 TiB 0.23
MIN/MAX VAR: 0.01/1.99 STDDEV: 0.23
```