# In-Transit encryption: ODF Internal Mode
* Case-1: ODF 4.13, with encryption enabled during creation of the storagesystem
  - From the UI, by ticking the checkbox during storagesystem creation
  - From the CLI, by including encryption in the storagecluster spec at creation time
  (This is the only officially supported way to use encryption)
* Case-2: ODF 4.13, without enabling encryption during storagesystem creation
* Case-3: A cluster upgraded from ODF 4.12 to ODF 4.13
* Case-4: A cluster created with encryption, where encryption needs to be turned off due to some issue
## Case-1: ODF 4.13, with encryption enabled during creation of the storagesystem
### Enabling encryption during storagesystem creation
* ##### While creating the storagesystem, tick the checkbox for enabling in-transit encryption, or from the CLI include the following in the storagecluster spec:
```
network:
connections:
encryption:
enabled: true
```
* ##### Wait until the storagecluster becomes ready
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 9m9s Ready 2023-04-28T05:42:29Z 4.13.0
```
---
### Verify if the encryption settings are applied on ceph
#### Enable ceph tool box & rsh into it
```
~ $ oc patch ocsinitialization ocsinit -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
```
```
~ $ oc get pods | grep tool
rook-ceph-tools-686df9f5f-w8c76 1/1 Running 0 15m
```
```
~ $ oc rsh rook-ceph-tools-686df9f5f-w8c76
sh-5.1$
```
#### Check that the encryption settings ms_client_mode, ms_cluster_mode & ms_service_mode are present with the value secure
<https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#connection-mode-configuration-options>
```
sh-5.1$ ceph config dump
WHO MASK LEVEL OPTION VALUE RO
global basic log_to_file true
global advanced mon_allow_pool_delete true
global advanced mon_allow_pool_size_one true
global advanced mon_cluster_log_file
global advanced mon_pg_warn_min_per_osd 0
global basic ms_client_mode secure *
global basic ms_cluster_mode secure *
global basic ms_service_mode secure *
global advanced rbd_default_map_options ms_mode=secure *
mon advanced auth_allow_insecure_global_id_reclaim false
mgr advanced mgr/balancer/mode upmap
mgr advanced mgr/prometheus/rbd_stats_pools ocs-storagecluster-cephblockpool *
mds.ocs-storagecluster-cephfilesystem-a basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-a basic mds_join_fs ocs-storagecluster-cephfilesystem
mds.ocs-storagecluster-cephfilesystem-b basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-b basic mds_join_fs ocs-storagecluster-cephfilesystem
```
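The check above can be done mechanically. A minimal sketch in shell, parsing a sample of the `ceph config dump` output shown above (the heredoc stands in for the live command piped from the toolbox; the awk column positions assume the empty MASK column seen in this output):

```shell
# Count how many of the three ms_*_mode options are set to "secure".
# "dump" is a sample of `ceph config dump` output; on a live cluster,
# pipe the real command into awk instead.
dump='global basic ms_client_mode secure *
global basic ms_cluster_mode secure *
global basic ms_service_mode secure *'
count=$(printf '%s\n' "$dump" \
  | awk '$3 ~ /^ms_(client|cluster|service)_mode$/ && $4 == "secure" {n++} END {print n+0}')
echo "secure ms modes: $count/3"   # expect 3/3 when encryption is enabled
```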
#### Check the ceph mon details; they should have only v2 addresses
```
sh-5.1$ ceph mon dump
epoch 3
fsid b21556cc-13f4-405c-a2e8-5cda2acfc2c9
last_changed 2023-04-28T05:45:41.742752+0000
created 2023-04-28T05:44:56.475472+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: v2:172.30.98.133:3300/0 mon.a
1: v2:172.30.76.134:3300/0 mon.b
2: v2:172.30.171.221:3300/0 mon.c
dumped monmap epoch 3
```
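A quick way to confirm the v2-only state is to count v1 entries in the monmap. A sketch against a sample of the `ceph mon dump` output above (replace the variable with the live command's output):

```shell
# With in-transit encryption enabled, every mon should expose only a
# v2 (port 3300) address and no legacy v1 (port 6789) entry.
monmap='0: v2:172.30.98.133:3300/0 mon.a
1: v2:172.30.76.134:3300/0 mon.b
2: v2:172.30.171.221:3300/0 mon.c'
v1=$(printf '%s\n' "$monmap" | grep -c 'v1:' || true)
echo "v1 addresses found: $v1"   # expect 0
```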
#### Check that the csi configmap is configured to use the v2 port 3300
```
~ $ oc get cm rook-ceph-csi-config -oyaml
apiVersion: v1
data:
csi-cluster-config-json: '[{"clusterID":"openshift-storage","monitors":["172.30.98.133:3300","172.30.76.134:3300","172.30.171.221:3300"],"namespace":"openshift-storage"}]'
kind: ConfigMap
metadata:
creationTimestamp: "2023-04-28T05:42:33Z"
name: rook-ceph-csi-config
namespace: openshift-storage
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: false
controller: true
kind: Deployment
name: rook-ceph-operator
uid: 8e15b114-ed89-4937-bfee-fd2d4ff3b61a
resourceVersion: "412095"
uid: eec76097-c012-4900-9383-fd2dd33bad9c
```
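The monitor list in the configmap can be validated the same way. A sketch using a sample of `csi-cluster-config-json` (on a cluster you would feed in `oc get cm rook-ceph-csi-config -o jsonpath='{.data.csi-cluster-config-json}'` instead):

```shell
# Verify every monitor endpoint in csi-cluster-config-json uses the
# v2 port 3300. "json" holds a sample of the configmap data.
json='[{"clusterID":"openshift-storage","monitors":["172.30.98.133:3300","172.30.76.134:3300","172.30.171.221:3300"],"namespace":"openshift-storage"}]'
non_v2=$(printf '%s\n' "$json" \
  | grep -oE '[0-9]+(\.[0-9]+){3}:[0-9]+' \
  | grep -cv ':3300$' || true)
echo "monitor endpoints not on port 3300: $non_v2"   # expect 0
```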
---
### Test by creating an rbd PVC, mounting it in a Pod & using the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: encryption-rbd-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep encryption
encryption-rbd-pvc Bound pvc-c271e73b-92a4-4710-9a6b-71a6d799830f 1Gi RWO ocs-storagecluster-ceph-rbd 67s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: encryption-rbd-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: rbd-storage
persistentVolumeClaim:
claimName: encryption-rbd-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: rbd-storage
EOF
```
```
~ $ oc get pods | grep encryption
encryption-rbd-pod 1/1 Running 0 2m41s
```
#### rsh into the pod and try using the mounted volume
```
$ oc rsh encryption-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & checking the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: encryption-cephfs-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep encryption
encryption-cephfs-pvc Bound pvc-25e473e2-a013-4707-bf09-14d99e7482f5 1Gi RWO ocs-storagecluster-cephfs 19s
encryption-rbd-pvc Bound pvc-c271e73b-92a4-4710-9a6b-71a6d799830f 1Gi RWO ocs-storagecluster-ceph-rbd 67s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: encryption-cephfs-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: cephfs-storage
persistentVolumeClaim:
claimName: encryption-cephfs-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep encryption
encryption-cephfs-pod 0/1 ContainerCreating 0 43s
encryption-rbd-pod 1/1 Running 0 2m41s
```
#### rsh into the pod & try using the mounted volume
```
$ oc rsh encryption-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---
## Case-2: ODF 4.13, without enabling encryption during storagesystem creation
**Important**
*If encryption is not enabled during storagesystem creation, enabling it afterwards, although possible, is not an officially supported operation.*
---
### Creating storagesystem without encryption
* While creating the storagesystem, leave the in-transit encryption checkbox unchecked.
If creating from the CLI, don't include encryption enabled true in the network connections spec.
* Wait for the storagecluster to become ready
---
### Test by creating an rbd PVC, mounting it in a Pod & using the volume
#### Create a rbd pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: normal-rbd-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep normal
normal-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 30s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: normal-rbd-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: rbd-storage
persistentVolumeClaim:
claimName: normal-rbd-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: rbd-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-rbd-pod 1/1 Running 0 27s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & using the volume
#### Create a cephfs pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: normal-cephfs-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep normal
normal-cephfs-pvc Bound pvc-5564dfa1-58d7-4679-82e6-f5280157bbe5 1Gi RWO ocs-storagecluster-cephfs 8s
normal-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 5m15s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: normal-cephfs-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: cephfs-storage
persistentVolumeClaim:
claimName: normal-cephfs-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-cephfs-pod 1/1 Running 0 31s
normal-rbd-pod 1/1 Running 0 6m47s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---
**Important**
*The below operations to enable encryption in this case are not officially supported and are provided for reference only.*
**Very Very Important**
*Enabling in-transit encryption doesn't affect existing mapped/mounted volumes. Once a volume is mapped/mounted, it keeps using the settings it was given at map/mount time, regardless of whether the encryption setting has since changed on the cluster.
The new setting affects only newly created volumes; for old volumes to pick up the new settings they have to be remapped/remounted one by one.*
---
### Enable in-transit encryption
* If the storagecluster was created from the UI, patch it to enable in-transit encryption
```
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/network/connections/encryption", "value": {"enabled": true} }]'
```
If it was created from the CLI, edit the storagecluster to add encryption enabled true to its spec
```
network:
connections:
encryption:
enabled: true
```
* Wait for the storagecluster to get ready
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 141m Ready 2023-01-19T11:56:18Z 4.12.0
```
---
### Verify if the encryption settings are applied on ceph
#### Enable ceph tool box & rsh into it
```
~ $ oc patch ocsinitialization ocsinit -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
```
```
~ $ oc get pods | grep tool
rook-ceph-tools-686df9f5f-w8c76 1/1 Running 0 15m
```
```
~ $ oc rsh rook-ceph-tools-686df9f5f-w8c76
sh-5.1$
```
#### Check that the encryption settings ms_client_mode, ms_cluster_mode & ms_service_mode are present with the value secure
<https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#connection-mode-configuration-options>
```
sh-5.1$ ceph config dump
WHO MASK LEVEL OPTION VALUE RO
global basic log_to_file true
global advanced mon_allow_pool_delete true
global advanced mon_allow_pool_size_one true
global advanced mon_cluster_log_file
global advanced mon_pg_warn_min_per_osd 0
global basic ms_client_mode secure *
global basic ms_cluster_mode secure *
global basic ms_service_mode secure *
global advanced rbd_default_map_options ms_mode=secure *
mon advanced auth_allow_insecure_global_id_reclaim false
mgr advanced mgr/balancer/mode upmap
mgr advanced mgr/prometheus/rbd_stats_pools ocs-storagecluster-cephblockpool *
mds.ocs-storagecluster-cephfilesystem-a basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-a basic mds_join_fs ocs-storagecluster-cephfilesystem
mds.ocs-storagecluster-cephfilesystem-b basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-b basic mds_join_fs ocs-storagecluster-cephfilesystem
```
#### Check the ceph mon details; they should have only v2 addresses
```
sh-5.1$ ceph mon dump
epoch 3
fsid b21556cc-13f4-405c-a2e8-5cda2acfc2c9
last_changed 2023-04-28T05:45:41.742752+0000
created 2023-04-28T05:44:56.475472+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: v2:172.30.98.133:3300/0 mon.a
1: v2:172.30.76.134:3300/0 mon.b
2: v2:172.30.171.221:3300/0 mon.c
dumped monmap epoch 3
```
#### Check that the csi configmap is configured to use the v2 port 3300
```
~ $ oc get cm rook-ceph-csi-config -oyaml
apiVersion: v1
data:
csi-cluster-config-json: '[{"clusterID":"openshift-storage","monitors":["172.30.98.133:3300","172.30.76.134:3300","172.30.171.221:3300"],"namespace":"openshift-storage"}]'
kind: ConfigMap
metadata:
creationTimestamp: "2023-04-28T05:42:33Z"
name: rook-ceph-csi-config
namespace: openshift-storage
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: false
controller: true
kind: Deployment
name: rook-ceph-operator
uid: 8e15b114-ed89-4937-bfee-fd2d4ff3b61a
resourceVersion: "412095"
uid: eec76097-c012-4900-9383-fd2dd33bad9c
```
---
### Test by creating an rbd PVC, mounting it in a Pod & using the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: encryption-rbd-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep encryption
encryption-rbd-pvc Bound pvc-c271e73b-92a4-4710-9a6b-71a6d799830f 1Gi RWO ocs-storagecluster-ceph-rbd 67s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: encryption-rbd-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: rbd-storage
persistentVolumeClaim:
claimName: encryption-rbd-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: rbd-storage
EOF
```
```
~ $ oc get pods | grep encryption
encryption-rbd-pod 1/1 Running 0 2m41s
```
#### rsh into the pod and try using the mounted volume
```
$ oc rsh encryption-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & using the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: encryption-cephfs-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep encryption
encryption-cephfs-pvc Bound pvc-25e473e2-a013-4707-bf09-14d99e7482f5 1Gi RWO ocs-storagecluster-cephfs 19s
encryption-rbd-pvc Bound pvc-c271e73b-92a4-4710-9a6b-71a6d799830f 1Gi RWO ocs-storagecluster-ceph-rbd 67s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: encryption-cephfs-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: cephfs-storage
persistentVolumeClaim:
claimName: encryption-cephfs-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep encryption
encryption-cephfs-pod 0/1 ContainerCreating 0 43s
encryption-rbd-pod 1/1 Running 0 2m41s
```
#### rsh into the pod & try using the mounted volume
```
$ oc rsh encryption-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---
### Verify that the earlier created volumes are accessible
```
~ $ oc rsh normal-rbd-pod
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
# sync && exit
```
```
~ $ oc rsh normal-cephfs-pod
# cd /usr/share/nginx/html
# ls
a
# touch b
# ls
a b
# sync && exit
```
---
## Case-3: A cluster upgraded from ODF 4.12 to ODF 4.13
### Setting up ODF 4.12
#### Install ODF 4.12 by choosing the appropriate subscription channel
```
~ $ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
mcg-operator.v4.12.2-rhodf NooBaa Operator 4.12.2-rhodf mcg-operator.v4.12.1 Succeeded
ocs-operator.v4.12.2-rhodf OpenShift Container Storage 4.12.2-rhodf ocs-operator.v4.12.1 Succeeded
odf-csi-addons-operator.v4.12.2-rhodf CSI Addons 4.12.2-rhodf odf-csi-addons-operator.v4.12.1 Succeeded
odf-operator.v4.12.2-rhodf OpenShift Data Foundation 4.12.2-rhodf odf-operator.v4.12.1 Succeeded
```
#### Create a storagecluster & wait for it to be ready
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 6m19s Ready 2023-04-28T20:15:18Z 4.12.0
```
---
### Create some workload to test after upgrade
* #### Create a rbd pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: 4-12-rbd-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep 4-12
4-12-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 30s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: 4-12-rbd-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: rbd-storage
persistentVolumeClaim:
claimName: 4-12-rbd-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: rbd-storage
EOF
```
```
~ $ oc get pods | grep 4-12
4-12-rbd-pod 1/1 Running 0 27s
```
* #### rsh into the pod & create a file on the volume
```
~ $ oc rsh 4-12-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
* #### Create a cephfs pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: 4-12-cephfs-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep 4-12
4-12-cephfs-pvc Bound pvc-5564dfa1-58d7-4679-82e6-f5280157bbe5 1Gi RWO ocs-storagecluster-cephfs 8s
4-12-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 5m15s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: 4-12-cephfs-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: cephfs-storage
persistentVolumeClaim:
claimName: 4-12-cephfs-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep 4-12
4-12-cephfs-pod 1/1 Running 0 31s
4-12-rbd-pod 1/1 Running 0 6m47s
```
* #### rsh into the pod & create a file on the volume
```
~ $ oc rsh 4-12-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---
### Upgrade to ODF 4.13
#### Wait for the upgrade to complete & the storagecluster to get ready
```
~ $ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
mcg-operator.v4.13.0-179.stable NooBaa Operator 4.13.0-179.stable mcg-operator.v4.12.2-rhodf Succeeded
ocs-operator.v4.13.0-179.stable OpenShift Container Storage 4.13.0-179.stable ocs-operator.v4.12.2-rhodf Succeeded
odf-csi-addons-operator.v4.13.0-179.stable CSI Addons 4.13.0-179.stable odf-csi-addons-operator.v4.12.2-rhodf Succeeded
odf-operator.v4.13.0-179.stable OpenShift Data Foundation 4.13.0-179.stable odf-operator.v4.12.2-rhodf Succeeded
```
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 142m Ready 2023-04-28T20:37:44Z 4.13.0
```
#### Check that the existing workload is usable
```
~ $ oc rsh 4-12-rbd-pod
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
# sync && exit
```
```
~ $ oc rsh 4-12-cephfs-pod
# cd /usr/share/nginx/html
# ls
a
# touch b
# ls
a b
# sync && exit
```
### Check ceph settings
#### Enable ceph tool box & rsh into it
```
~ $ oc patch ocsinitialization ocsinit -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/enableCephTools", "value": true }]'
```
```
~ $ oc get pods | grep tool
rook-ceph-tools-686df9f5f-w8c76 1/1 Running 0 15m
```
```
~ $ oc rsh rook-ceph-tools-686df9f5f-w8c76
sh-5.1$
```
#### Ceph config
```
~ $ oc rsh rook-ceph-tools-7c5884d455-k8cjj
sh-5.1$ ceph config dump
WHO MASK LEVEL OPTION VALUE RO
global basic log_to_file true
global advanced mon_allow_pool_delete true
global advanced mon_allow_pool_size_one true
global advanced mon_cluster_log_file
global advanced mon_pg_warn_min_per_osd 0
global advanced osd_scrub_auto_repair true
global advanced rbd_default_map_options ms_mode=prefer-crc *
mon advanced auth_allow_insecure_global_id_reclaim false
mgr advanced mgr/balancer/mode upmap
mgr advanced mgr/prometheus/rbd_stats_pools ocs-storagecluster-cephblockpool *
mds.ocs-storagecluster-cephfilesystem-a basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-a basic mds_join_fs ocs-storagecluster-cephfilesystem
mds.ocs-storagecluster-cephfilesystem-b basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-b basic mds_join_fs ocs-storagecluster-cephfilesystem
```
#### Both v1 & v2 ports are present on the ceph mons
```
sh-5.1$ ceph mon dump
epoch 4
fsid 053e09a7-bf58-4312-a3b9-2d05b5723fbf
last_changed 2023-04-30T20:56:08.009439+0000
created 2023-04-30T20:37:36.381975+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:172.30.15.98:3300/0,v1:172.30.15.98:6789/0] mon.a
1: [v2:172.30.230.232:3300/0,v1:172.30.230.232:6789/0] mon.b
2: [v2:172.30.44.179:3300/0,v1:172.30.44.179:6789/0] mon.c
dumped monmap epoch 4
```
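The post-upgrade state can be confirmed by counting both address families in the monmap. A sketch against a sample of the output above (the variable stands in for the live `ceph mon dump`):

```shell
# After the upgrade, each mon still advertises both a v2 (3300) and a
# legacy v1 (6789) address, since encryption has not been enabled yet.
monmap='0: [v2:172.30.15.98:3300/0,v1:172.30.15.98:6789/0] mon.a
1: [v2:172.30.230.232:3300/0,v1:172.30.230.232:6789/0] mon.b
2: [v2:172.30.44.179:3300/0,v1:172.30.44.179:6789/0] mon.c'
v1=$(printf '%s\n' "$monmap" | grep -c 'v1:')
v2=$(printf '%s\n' "$monmap" | grep -c 'v2:')
echo "v1=$v1 v2=$v2"   # expect v1=3 v2=3 for a 3-mon cluster
```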
#### The v2 port 3300 is being used by ceph-csi
```
~ $ oc get cm rook-ceph-csi-config -o yaml
apiVersion: v1
data:
csi-cluster-config-json: '[{"clusterID":"openshift-storage","monitors":["172.30.15.98:3300","172.30.230.232:3300","172.30.44.179:3300"],"namespace":"openshift-storage"}]'
kind: ConfigMap
metadata:
creationTimestamp: "2023-04-30T20:34:47Z"
name: rook-ceph-csi-config
namespace: openshift-storage
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: false
controller: true
kind: Deployment
name: rook-ceph-operator
uid: ccc89645-575a-40f1-b003-9a60d114ff59
resourceVersion: "48347"
uid: b410df16-f00b-49fa-b499-5c3afac8b4e5
```
---
### Test by creating an rbd PVC, mounting it in a Pod & using the volume
#### Create a rbd pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: normal-rbd-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep normal
normal-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 30s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: normal-rbd-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: rbd-storage
persistentVolumeClaim:
claimName: normal-rbd-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: rbd-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-rbd-pod 1/1 Running 0 27s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & using the volume
#### Create a cephfs pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: normal-cephfs-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep normal
normal-cephfs-pvc Bound pvc-5564dfa1-58d7-4679-82e6-f5280157bbe5 1Gi RWO ocs-storagecluster-cephfs 8s
normal-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 5m15s
```
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: normal-cephfs-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: cephfs-storage
persistentVolumeClaim:
claimName: normal-cephfs-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-cephfs-pod 1/1 Running 0 31s
normal-rbd-pod 1/1 Running 0 6m47s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---
**Important**
*The below operations to enable encryption in this case are not officially supported and are provided for reference only.*
**Very Very Important**
*Enabling in-transit encryption doesn't affect existing mapped/mounted volumes. Once a volume is mapped/mounted, it keeps using the settings it was given at map/mount time, regardless of whether the encryption setting has since changed on the cluster.
The new setting affects only newly created volumes; for old volumes to pick up the new settings they have to be remapped/remounted one by one.*
---
### Enable in-transit encryption
* Edit the storagecluster to add encryption enabled true in storagecluster spec
```
network:
connections:
encryption:
enabled: true
```
* Wait for the storagecluster to get ready
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 141m Ready 2023-01-19T11:56:18Z 4.12.0
```
---
### Verify if the encryption settings are applied on ceph
#### Check that the encryption settings ms_client_mode, ms_cluster_mode & ms_service_mode are present with the value secure
<https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#connection-mode-configuration-options>
```
sh-5.1$ ceph config dump
WHO MASK LEVEL OPTION VALUE RO
global basic log_to_file true
global advanced mon_allow_pool_delete true
global advanced mon_allow_pool_size_one true
global advanced mon_cluster_log_file
global advanced mon_pg_warn_min_per_osd 0
global basic ms_client_mode secure *
global basic ms_cluster_mode secure *
global basic ms_service_mode secure *
global advanced rbd_default_map_options ms_mode=secure *
mon advanced auth_allow_insecure_global_id_reclaim false
mgr advanced mgr/balancer/mode upmap
mgr advanced mgr/prometheus/rbd_stats_pools ocs-storagecluster-cephblockpool *
mds.ocs-storagecluster-cephfilesystem-a basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-a basic mds_join_fs ocs-storagecluster-cephfilesystem
mds.ocs-storagecluster-cephfilesystem-b basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-b basic mds_join_fs ocs-storagecluster-cephfilesystem
```
#### Check the ceph mon details; they will have both v1 & v2 addresses
```
sh-5.1$ ceph mon dump
epoch 4
fsid 053e09a7-bf58-4312-a3b9-2d05b5723fbf
last_changed 2023-04-30T20:56:08.009439+0000
created 2023-04-30T20:37:36.381975+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:172.30.15.98:3300/0,v1:172.30.15.98:6789/0] mon.a
1: [v2:172.30.230.232:3300/0,v1:172.30.230.232:6789/0] mon.b
2: [v2:172.30.44.179:3300/0,v1:172.30.44.179:6789/0] mon.c
dumped monmap epoch 4
```
#### Check that the csi configmap is configured to use the v2 port 3300
```
~ $ oc get cm rook-ceph-csi-config -oyaml
apiVersion: v1
data:
csi-cluster-config-json: '[{"clusterID":"openshift-storage","monitors":["172.30.98.133:3300","172.30.76.134:3300","172.30.171.221:3300"],"namespace":"openshift-storage"}]'
kind: ConfigMap
metadata:
creationTimestamp: "2023-04-28T05:42:33Z"
name: rook-ceph-csi-config
namespace: openshift-storage
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: false
controller: true
kind: Deployment
name: rook-ceph-operator
uid: 8e15b114-ed89-4937-bfee-fd2d4ff3b61a
resourceVersion: "412095"
uid: eec76097-c012-4900-9383-fd2dd33bad9c
```
---
### Test by creating an rbd PVC, mounting it in a Pod & using the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: encryption-rbd-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep encryption
encryption-rbd-pvc Bound pvc-c271e73b-92a4-4710-9a6b-71a6d799830f 1Gi RWO ocs-storagecluster-ceph-rbd 67s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: encryption-rbd-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: rbd-storage
persistentVolumeClaim:
claimName: encryption-rbd-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: rbd-storage
EOF
```
```
~ $ oc get pods | grep encryption
encryption-rbd-pod 1/1 Running 0 2m41s
```
#### rsh into the pod and try using the mounted volume
```
$ oc rsh encryption-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & using the volume
#### Creating & Checking PVC
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: encryption-cephfs-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep encryption
encryption-cephfs-pvc Bound pvc-25e473e2-a013-4707-bf09-14d99e7482f5 1Gi RWO ocs-storagecluster-cephfs 19s
encryption-rbd-pvc Bound pvc-c271e73b-92a4-4710-9a6b-71a6d799830f 1Gi RWO ocs-storagecluster-ceph-rbd 67s
```
#### Creating & Checking Pod
```
~ % cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
name: encryption-cephfs-pod
spec:
nodeSelector:
topology.kubernetes.io/zone: us-east-1a
volumes:
- name: cephfs-storage
persistentVolumeClaim:
claimName: encryption-cephfs-pvc
containers:
- name: nginx-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep encryption
encryption-cephfs-pod 0/1 ContainerCreating 0 43s
encryption-rbd-pod 1/1 Running 0 2m41s
```
#### rsh into the pod & try using the mounted volume
```
$ oc rsh encryption-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---
### Verify that the earlier created normal volumes are accessible
```
~ $ oc rsh normal-rbd-pod
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
# sync && exit
```
```
~ $ oc rsh normal-cephfs-pod
# cd /usr/share/nginx/html
# ls
a
# touch b
# ls
a b
# sync && exit
```
---
### Verify that the earlier created 4-12 volumes are accessible
```
~ $ oc rsh 4-12-rbd-pod
# cd /usr/share/nginx/html
# ls
a b lost+found
# touch c
# ls
a b c lost+found
# sync && exit
```
```
~ $ oc rsh 4-12-cephfs-pod
# cd /usr/share/nginx/html
# ls
a b
# touch c
# ls
a b c
# sync && exit
```
---
## Case-4: Disabling in-transit encryption
(This case should be tested after Case-1, not after the other cases)
**Important**
This is not an officially supported action. To stop using in-transit encryption, the storagecluster should be deleted & recreated without encryption.
**Important**
Removing the encryption settings/disabling encryption doesn't affect existing mapped/mounted volumes. Once a volume is mapped/mounted, it keeps using the settings it was given at map/mount time, regardless of whether the encryption setting has since changed on the cluster. The new setting affects only newly created volumes; for old volumes to pick up the new settings they have to be remapped/remounted one by one.
---
#### Setting the encryption enabled flag to false
```
oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/network/connections/encryption", "value": {"enabled": false} }]'
```
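Equivalently, the change can be made with `oc edit storagecluster`; after it is applied, the storagecluster spec should carry the same fragment that was set to `true` at creation time, now flipped to `false`:
```
network:
  connections:
    encryption:
      enabled: false
```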
#### Wait for the storagecluster to become Ready
```
~ $ oc get storagecluster
NAME AGE PHASE EXTERNAL CREATED AT VERSION
ocs-storagecluster 141m Ready 2023-01-19T11:56:18Z 4.12.0
```
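Rather than re-running `oc get storagecluster` by hand, the wait can be scripted. A minimal sketch; the jsonpath probe for `.status.phase` is an assumption about the storagecluster status fields, and the `echo Ready` probe at the end is a stand-in so the helper can be exercised without a cluster:
```shell
# wait_phase PROBE WANT TRIES: run PROBE once per second until its output
# equals WANT, printing "ready" on success or "timeout" after TRIES attempts.
wait_phase() {
  probe=$1 want=$2 tries=$3
  i=0
  while [ "$i" -lt "$tries" ]; do
    [ "$($probe)" = "$want" ] && { echo ready; return 0; }
    i=$((i + 1))
    sleep 1
  done
  echo timeout
  return 1
}

# Against a live cluster the probe would be something like (assumption):
#   wait_phase "oc get storagecluster ocs-storagecluster -o jsonpath={.status.phase}" Ready 60
# Stand-in probe so the sketch runs anywhere:
wait_phase "echo Ready" Ready 5
```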
---
### Verify if the encryption settings are removed from ceph config
#### rsh into ceph toolbox pod
```
~ $ oc get pods | grep tool
rook-ceph-tools-686df9f5f-w8c76 1/1 Running 0 15m
```
```
~ $ oc rsh rook-ceph-tools-686df9f5f-w8c76
sh-4.4$
```
#### Check that the encryption settings ms_client_mode, ms_cluster_mode & ms_service_mode are removed, and that rbd_default_map_options is set to ms_mode=prefer-crc
```
sh-5.1$ ceph config dump
WHO MASK LEVEL OPTION VALUE RO
global basic log_to_file true
global advanced mon_allow_pool_delete true
global advanced mon_allow_pool_size_one true
global advanced mon_cluster_log_file
global advanced mon_pg_warn_min_per_osd 0
global advanced osd_scrub_auto_repair true
global advanced rbd_default_map_options ms_mode=prefer-crc *
mon advanced auth_allow_insecure_global_id_reclaim false
mgr advanced mgr/balancer/mode upmap
mgr advanced mgr/prometheus/rbd_stats_pools ocs-storagecluster-cephblockpool *
mds.ocs-storagecluster-cephfilesystem-a basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-a basic mds_join_fs ocs-storagecluster-cephfilesystem
mds.ocs-storagecluster-cephfilesystem-b basic mds_cache_memory_limit 4294967296
mds.ocs-storagecluster-cephfilesystem-b basic mds_join_fs ocs-storagecluster-cephfilesystem
```
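The dump check can also be scripted; a minimal sketch that greps `ceph config dump` text for leftover secure-mode options (the sample lines piped in below are illustrative, mirroring the outputs shown above, not taken from a live cluster):
```shell
# scan_dump: read `ceph config dump` text on stdin and report whether any
# ms_client_mode / ms_cluster_mode / ms_service_mode entry is still "secure".
scan_dump() {
  if grep -qE 'ms_(client|cluster|service)_mode[[:space:]]+secure'; then
    echo "in-transit encryption still configured"
  else
    echo "encryption options cleared"
  fi
}

# A dump from an encrypted cluster still carries a secure mode option:
printf 'global basic ms_client_mode secure *\n' | scan_dump
# After disabling, only rbd_default_map_options ms_mode=prefer-crc remains:
printf 'global advanced rbd_default_map_options ms_mode=prefer-crc *\n' | scan_dump
```
On a live cluster the input would come from the toolbox pod, e.g. `ceph config dump | scan_dump`.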
### rsh into both the rbd & cephfs pods to check the volumes
```
$ oc rsh encryption-rbd-pod
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
#
```
```
$ oc rsh encryption-cephfs-pod
# cd /usr/share/nginx/html
# ls
a
# touch b
# ls
a b
#
```
---
### For existing workloads to stop using in-transit encryption, the volumes have to be remapped/remounted.
---
### Test by creating an rbd PVC, mounting it in a Pod & using the volume
#### Create a rbd pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: normal-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
```
```
~ $ oc get pvc | grep normal
normal-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 30s
```
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: normal-rbd-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: normal-rbd-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-rbd-pod 1/1 Running 0 27s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
---
### Test by creating a cephfs PVC, mounting it in a Pod & using the volume
#### Create a cephfs pvc & a pod to use it
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: normal-cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```
```
~ $ oc get pvc | grep normal
normal-cephfs-pvc Bound pvc-5564dfa1-58d7-4679-82e6-f5280157bbe5 1Gi RWO ocs-storagecluster-cephfs 8s
normal-rbd-pvc Bound pvc-2001b9a3-0d28-475a-9c4f-56d820c331b5 1Gi RWO ocs-storagecluster-ceph-rbd 5m15s
```
```
~ $ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: normal-cephfs-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: cephfs-storage
      persistentVolumeClaim:
        claimName: normal-cephfs-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: cephfs-storage
EOF
```
```
~ $ oc get pods | grep normal
normal-cephfs-pod 1/1 Running 0 31s
normal-rbd-pod 1/1 Running 0 6m47s
```
#### rsh into the pod & try using the mounted volume
```
~ $ oc rsh normal-cephfs-pod
# cd /usr/share/nginx/html
# ls
# touch a
# ls
a
# sync && exit
```
---