# In-Transit Compression (Dev Preview): ODF Internal Mode
## Important
**Applying compression before encryption has security implications, as it reduces the security of messages between peers.
Applying compression after encryption is inefficient, and the resulting cost reduction would be minimal.
Therefore, when both encryption and compression are enabled, Ceph ignores the compression setting and messages are not compressed. This behaviour can be overridden [as described here](https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#confval-ms_compress_secure).**
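If compression on encrypted connections is really required and the security trade-off is acceptable, the Ceph option referenced above can be set from the toolbox pod; a minimal sketch:
```
# Inside the rook-ceph-tools pod: allow compression even on secure (encrypted) connections
ceph config set global ms_compress_secure true
```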
## Cases
1. Compression (✅)
   Storagecluster created with compression enabled
2. nil ---> Compression (✅)
   Storagecluster created without encryption or compression; compression is enabled later by patching the storagecluster
   * 2.1: ODF 4.14
   * 2.2: ODF 4.12 -> 4.13 -> 4.14
3. Compression ---> nil (✅)
   Compression is disabled afterwards
4. Compression, Encryption (❌)
   Both encryption & compression are enabled together
   * ❌ Encryption & compression don't work together; only encryption will take effect
   * ❌ Encryption can't be turned off on the fly for existing volumes using it
---
## Case 1: Storagecluster created without encryption, with compression enabled
### Creation of storagecluster
While creating the storagecluster, include compression enabled true in the network connections spec:
```
network:
  connections:
    compression:
      enabled: true
```
Wait until the storagecluster becomes ready:
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 62m Ready 2023-07-27T12:15:47Z 4.14.0
```
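Optionally, confirm that the setting propagated to the CephCluster CR (a quick check; assumes the default CephCluster name ocs-storagecluster-cephcluster):
```
oc get cephcluster ocs-storagecluster-cephcluster -n openshift-storage \
  -o jsonpath='{.spec.network.connections.compression.enabled}{"\n"}'
```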
### Verify that the compression settings are applied on Ceph
#### Enable the Ceph toolbox & rsh into it
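If the toolbox is not already running, it can be enabled roughly as follows (a sketch; assumes the default OCSInitialization resource name ocs-init in the openshift-storage namespace):
```
# Enable the rook-ceph-tools deployment
oc patch ocsinitialization ocs-init -n openshift-storage --type json \
  --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
# rsh into the toolbox pod once it is Running
oc rsh -n openshift-storage $(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name | head -n1)
```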
#### Check that the compression setting ms_osd_compress_mode is present & has the value force
See <https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#confval-ms_osd_compress_mode>
#### Check that rbd_default_map_options has the value ms_mode=prefer-crc
```
sh-5.1$ ceph config dump | grep ms
global advanced ms_osd_compress_mode force
global advanced rbd_default_map_options ms_mode=prefer-crc *
```
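A narrower check than grepping the full dump (a sketch, run inside the toolbox; if your Ceph version does not accept the daemon type osd here, use a specific daemon such as osd.0):
```
# Should print: force
ceph config get osd ms_osd_compress_mode
```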
---
### Test by creating an rbd PVC, mounting it in a Pod & using the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: compression-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep compression
compression-rbd-pvc Bound pvc-1cf5f163-1304-4f40-9233-40f939e5ee8e 1Gi RWO ocs-storagecluster-ceph-rbd 9s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: compression-rbd-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: compression-rbd-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pods | grep compression
compression-rbd-pod 1/1 Running 0 23s
```
#### rsh into the pod and try using the mounted volume
```
$ oc rsh compression-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
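Optionally, push some real data through the compressed connection and read it back (a sketch; the file name and size are arbitrary, and zeros are chosen because they compress well):
```
oc rsh compression-rbd-pod dd if=/dev/zero of=/usr/share/nginx/html/testfile bs=1M count=100
oc rsh compression-rbd-pod dd if=/usr/share/nginx/html/testfile of=/dev/null bs=1M
oc rsh compression-rbd-pod rm /usr/share/nginx/html/testfile
```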
---
### Test by creating a cephfs PVC, mounting it in a Pod & checking the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: compression-cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep compression
compression-cephfs-pvc Bound pvc-ec052f7b-fdca-4644-a3d0-780fdda944c5 1Gi RWO ocs-storagecluster-cephfs 9s
compression-rbd-pvc Bound pvc-1cf5f163-1304-4f40-9233-40f939e5ee8e 1Gi RWO ocs-storagecluster-ceph-rbd 14m
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: compression-cephfs-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: cephfs-storage
      persistentVolumeClaim:
        claimName: compression-cephfs-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep compression
compression-cephfs-pod 1/1 Running 0 19s
compression-rbd-pod 1/1 Running 0 14m
```
#### rsh into the pod & try using the mounted volume
```
$ oc rsh compression-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
## Case 2.1: Compression is enabled by patching the storagecluster (ODF 4.14)
### Creating storagecluster without encryption or compression
While creating the storagecluster, do not include encryption or compression in the network connections spec.
Wait for the storagecluster to become ready:
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 75m Ready 2023-07-27T12:15:47Z 4.14.0
```
---
### Create workload to test later after enabling compression
#### Create a rbd pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: normal-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep normal
normal-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 30s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: normal-rbd-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: normal-rbd-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-rbd-pod 1/1 Running 0 27s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & using the volume
#### Create a cephfs pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: normal-cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep normal
normal-cephfs-pvc Bound pvc-5564dfa1-58d7-4679-82e6-f5280157bbe5 1Gi RWO ocs-storagecluster-cephfs 8s
normal-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 5m15s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: normal-cephfs-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: cephfs-storage
      persistentVolumeClaim:
        claimName: normal-cephfs-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-cephfs-pod 1/1 Running 0 31s
normal-rbd-pod 1/1 Running 0 6m47s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
### Enable compression by patching the storagecluster
Patch the storagecluster to set compression enabled to true in the network connections spec:
```
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/network", "value": {"connections": {"compression": {"enabled": true}}} }]'
```
Wait for the storagecluster to get ready
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 141m Ready 2023-01-19T11:56:18Z 4.12.0
```
All the Ceph daemons (mons, mgrs, OSDs, MDSes) will now restart one by one; wait around 10 minutes for all the restarts to complete.
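One way to follow the rolling restarts (a sketch; watch until all mon/mgr/osd/mds pods are back to Running with fresh ages):
```
oc get pods -n openshift-storage -w | grep --line-buffered -E 'rook-ceph-(mon|mgr|osd|mds)'
```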
### Check the compression settings on Ceph
#### Enable the Ceph toolbox & rsh into it (see Case 1)
#### Check that the compression setting ms_osd_compress_mode is present & has the value force
See <https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#confval-ms_osd_compress_mode>
#### Check that rbd_default_map_options has the value ms_mode=prefer-crc
```
sh-5.1$ ceph config dump | grep ms
global advanced ms_osd_compress_mode force
global advanced rbd_default_map_options ms_mode=prefer-crc *
```
#### Check the Ceph mon dump & verify that the mons have only v2 addresses
```
sh-4.4$ ceph mon dump
epoch 2
fsid 0333f009-78b1-4abd-8f20-4a1e90c76efb
last_changed 2023-05-22T16:40:49.972856+0000
created 2023-05-22T16:40:21.274446+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: v2:172.30.189.12:3300/0 mon.b
1: v2:172.30.251.50:3300/0 mon.a
2: v2:172.30.173.180:3300/0 mon.c
dumped monmap epoch 2
```
---
### Test by creating an rbd PVC, mounting it in a Pod & using the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: compression-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep compression
compression-rbd-pvc Bound pvc-db4884c5-3d88-46db-ad16-cb74b0f99fd8 1Gi RWO ocs-storagecluster-ceph-rbd 17s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: compression-rbd-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: compression-rbd-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pods | grep compression
compression-rbd-pod 1/1 Running 0 16s
```
#### rsh into the pod and try using the mounted volume
```
$ oc rsh compression-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & checking the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: compression-cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep compression
compression-cephfs-pvc Bound pvc-ad9ec652-3042-45fd-a73d-53d8b4604a08 1Gi RWO ocs-storagecluster-cephfs 9s
compression-rbd-pvc Bound pvc-db4884c5-3d88-46db-ad16-cb74b0f99fd8 1Gi RWO ocs-storagecluster-ceph-rbd 4m27s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: compression-cephfs-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: cephfs-storage
      persistentVolumeClaim:
        claimName: compression-cephfs-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep compression
compression-cephfs-pod 1/1 Running 0 23s
compression-rbd-pod 1/1 Running 0 2m25s
```
#### rsh into the pod & try using the mounted volume
```
$ oc rsh compression-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---
### Verify that the earlier created volumes are accessible
```
~ $ oc rsh normal-rbd-pod
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
# sync && exit
```
```
~ $ oc rsh normal-cephfs-pod
# cd /usr/share/nginx/html
# ls
a
# touch b
# ls
a b
# sync && exit
```
---
## Case 2.2: Compression is enabled by patching storagecluster CR (ODF 4.12-->ODF 4.13-->ODF 4.14)
### Install ODF 4.12 & create a storagecluster; wait for it to be ready
### Create some 4.12 workload to test later after enabling compression
#### Create a rbd pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: 4-12-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep 4-12
4-12-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 30s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: 4-12-rbd-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: 4-12-rbd-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pods | grep 4-12
4-12-rbd-pod 1/1 Running 0 27s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh 4-12-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
#### Create a cephfs pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: 4-12-cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep 4-12
4-12-cephfs-pvc Bound pvc-5564dfa1-58d7-4679-82e6-f5280157bbe5 1Gi RWO ocs-storagecluster-cephfs 8s
4-12-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 5m15s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: 4-12-cephfs-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: cephfs-storage
      persistentVolumeClaim:
        claimName: 4-12-cephfs-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep 4-12
4-12-cephfs-pod 1/1 Running 0 31s
4-12-rbd-pod 1/1 Running 0 6m47s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh 4-12-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
### Upgrade to ODF 4.13 & wait for the storagecluster to become ready
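A quick way to confirm the operator version after each upgrade (a sketch; exact CSV names vary by z-stream):
```
oc get csv -n openshift-storage
oc get storagecluster -n openshift-storage
```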
### Create some 4.13 workload to test later after enabling compression
#### Create a rbd pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: 4-13-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep 4-13
4-13-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 30s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: 4-13-rbd-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: 4-13-rbd-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pods | grep 4-13
4-13-rbd-pod 1/1 Running 0 27s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh 4-13-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
#### Create a cephfs pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: 4-13-cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep 4-13
4-13-cephfs-pvc Bound pvc-5564dfa1-58d7-4679-82e6-f5280157bbe5 1Gi RWO ocs-storagecluster-cephfs 8s
4-13-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 5m15s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: 4-13-cephfs-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: cephfs-storage
      persistentVolumeClaim:
        claimName: 4-13-cephfs-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep 4-13
4-13-cephfs-pod 1/1 Running 0 31s
4-13-rbd-pod 1/1 Running 0 6m47s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh 4-13-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
### Upgrade to ODF 4.14 & wait for the storagecluster to become ready
### Enable compression by patching the storagecluster
Patch the storagecluster to set compression enabled to true in the network connections spec:
```
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/network", "value": {"connections": {"compression": {"enabled": true}}} }]'
```
Wait for the storagecluster to get ready
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 141m Ready 2023-01-19T11:56:18Z 4.12.0
```
All the Ceph daemons (mons, mgrs, OSDs, MDSes) will now restart one by one; wait around 10 minutes for all the restarts to complete.
### Check the compression settings on Ceph
#### Enable the Ceph toolbox & rsh into it (see Case 1)
#### Check that the compression setting ms_osd_compress_mode is present & has the value force
See <https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#confval-ms_osd_compress_mode>
#### Check that rbd_default_map_options has the value ms_mode=prefer-crc
```
sh-5.1$ ceph config dump | grep ms
global advanced ms_osd_compress_mode force
global advanced rbd_default_map_options ms_mode=prefer-crc *
```
#### Check the Ceph mon dump & verify whether the mons have both v1 & v2 addresses
```
sh-4.4$ ceph mon dump
epoch 2
fsid 0333f009-78b1-4abd-8f20-4a1e90c76efb
last_changed 2023-05-22T16:40:49.972856+0000
created 2023-05-22T16:40:21.274446+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: v2:172.30.189.12:3300/0 mon.b
1: v2:172.30.251.50:3300/0 mon.a
2: v2:172.30.173.180:3300/0 mon.c
dumped monmap epoch 2
```
---
### Test by creating an rbd PVC, mounting it in a Pod & using the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: compression-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep compression
compression-rbd-pvc Bound pvc-db4884c5-3d88-46db-ad16-cb74b0f99fd8 1Gi RWO ocs-storagecluster-ceph-rbd 17s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: compression-rbd-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: compression-rbd-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pods | grep compression
compression-rbd-pod 1/1 Running 0 16s
```
#### rsh into the pod and try using the mounted volume
```
$ oc rsh compression-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & checking the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: compression-cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep compression
compression-cephfs-pvc Bound pvc-ad9ec652-3042-45fd-a73d-53d8b4604a08 1Gi RWO ocs-storagecluster-cephfs 9s
compression-rbd-pvc Bound pvc-db4884c5-3d88-46db-ad16-cb74b0f99fd8 1Gi RWO ocs-storagecluster-ceph-rbd 4m27s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: compression-cephfs-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: cephfs-storage
      persistentVolumeClaim:
        claimName: compression-cephfs-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep compression
compression-cephfs-pod 1/1 Running 0 23s
compression-rbd-pod 1/1 Running 0 2m25s
```
#### rsh into the pod & try using the mounted volume
```
$ oc rsh compression-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---
### Check if 4-12 & 4-13 volumes are usable
```
~ $ oc rsh 4-12-rbd-pod
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
# sync && exit
```
```
~ $ oc rsh 4-12-cephfs-pod
# cd /usr/share/nginx/html
# ls
a
# touch b
# ls
a b
# sync && exit
```
```
~ $ oc rsh 4-13-rbd-pod
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
# sync && exit
```
```
~ $ oc rsh 4-13-cephfs-pod
# cd /usr/share/nginx/html
# ls
a
# touch b
# ls
a b
# sync && exit
```
## Case 3: Compression is disabled afterwards
### Patch the storagecluster CR
Patch the storagecluster to set compression enabled to false:
```
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/network/connections/compression", "value": {"enabled": false} }]'
```
Wait for the storagecluster to get ready
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 80m Ready 2023-07-27T12:15:47Z 4.14.0
```
All the Ceph daemons (mons, mgrs, OSDs, MDSes) will now restart one by one; wait around 10 minutes for all the restarts to complete.
### Check that the compression setting is removed on Ceph
#### Check that ms_osd_compress_mode is no longer present
See <https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#confval-ms_osd_compress_mode>
```
~ $ oc rsh rook-ceph-tools-7986747df4-t8nxn
sh-5.1$ ceph config dump | grep ms
global advanced rbd_default_map_options ms_mode=prefer-crc *
```
---
### Create workload to test after disabling compression
#### Create a rbd pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: normal-rbd-pvc-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep normal
normal-cephfs-pvc Bound pvc-aeb945fa-c428-4b7f-9d70-2003b5937804 1Gi RWO ocs-storagecluster-cephfs 40m
normal-rbd-pvc Bound pvc-7f422393-90bd-4325-be20-16e0f2221f03 1Gi RWO ocs-storagecluster-ceph-rbd 42m
normal-rbd-pvc-1 Bound pvc-d9d4477b-5fcc-41ab-8347-a0efa7a5237a 1Gi RWO ocs-storagecluster-ceph-rbd 8s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: normal-rbd-pod-1
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: normal-rbd-pvc-1
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-cephfs-pod 1/1 Running 0 41m
normal-rbd-pod 1/1 Running 0 42m
normal-rbd-pod-1 1/1 Running 0 14s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-rbd-pod-1
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & using the volume
#### Create a cephfs pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: normal-cephfs-pvc-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep normal
normal-cephfs-pvc Bound pvc-aeb945fa-c428-4b7f-9d70-2003b5937804 1Gi RWO ocs-storagecluster-cephfs 43m
normal-cephfs-pvc-1 Bound pvc-27ffd60b-b40b-4fab-858e-c7da9e9b5756 1Gi RWO ocs-storagecluster-cephfs 11s
normal-rbd-pvc Bound pvc-7f422393-90bd-4325-be20-16e0f2221f03 1Gi RWO ocs-storagecluster-ceph-rbd 44m
normal-rbd-pvc-1 Bound pvc-d9d4477b-5fcc-41ab-8347-a0efa7a5237a 1Gi RWO ocs-storagecluster-ceph-rbd 2m24s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: normal-cephfs-pod-1
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: cephfs-storage
      persistentVolumeClaim:
        claimName: normal-cephfs-pvc-1
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-cephfs-pod-1 1/1 Running 0 31s
normal-rbd-pod-1 1/1 Running 0 6m47s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-cephfs-pod-1
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---
### Verify that the earlier created volumes are accessible
```
~ $ oc rsh compression-rbd-pod
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
# sync && exit
```
```
~ $ oc rsh compression-cephfs-pod
# cd /usr/share/nginx/html
# ls
a
# touch b
# ls
a b
# sync && exit
```
---
## Case 4: Both encryption & Compression are enabled together
### Create storagecluster with both encryption & compression
Either include both encryption enabled and compression enabled in the network connections spec while creating the storagecluster:
```
network:
  connections:
    encryption:
      enabled: true
    compression:
      enabled: true
```
Or patch an existing storagecluster with both options:
```
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/network", "value": {"connections": {"encryption": {"enabled": true},"compression": {"enabled": true}}} }]'
```
Wait for the storagecluster to become ready
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 8m21s Ready 2023-05-22T17:44:38Z 4.13.0
```
---
### Check the ocs-operator pod log for the message about both encryption & compression being enabled together
```
{"level":"info","ts":"2023-05-23T08:12:43Z","logger":"controllers.StorageCluster","msg":"Both in-transit encryption & compression are enabled. To protect security of encrypted messages ceph will ignore compression","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster"}
```
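The message above can be pulled with something like this (a sketch; assumes the operator Deployment is named ocs-operator):
```
oc logs -n openshift-storage deploy/ocs-operator | grep -i compression
```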
### Check the compression & encryption settings on Ceph
#### Enable the Ceph toolbox & rsh into it (see Case 1)
#### Check that the encryption settings ms_client_mode, ms_cluster_mode & ms_service_mode are present & have the value secure, and that rbd_default_map_options has the value ms_mode=secure
See <https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#connection-mode-configuration-options>
#### Check that the compression setting ms_osd_compress_mode is present & has the value force
See <https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#confval-ms_osd_compress_mode>
```
sh-5.1$ ceph config dump | grep ms
global basic ms_client_mode secure *
global basic ms_cluster_mode secure *
global advanced ms_osd_compress_mode force
global basic ms_service_mode secure *
global advanced rbd_default_map_options ms_mode=secure *
```
---
**This is an unsupported configuration; do not proceed from here onwards.**
Decide whether to continue with encryption or with compression.
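Before picking one of the patches below, it can help to confirm what is currently set on the storagecluster (a sketch):
```
oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='{.spec.network.connections}{"\n"}'
```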
---
### If we want to continue with encryption only
#### Patch storagecluster
```
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/network", "value": {"connections": {"compression": {"enabled": false}}} }]'
```
---
### If we want to continue with compression only
#### Patch storagecluster
```
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/network", "value": {"connections": {"encryption": {"enabled": false}}} }]'
```