# Special storageclass for virtualization environments
https://issues.redhat.com/browse/RHSTOR-4521
We watch for the kubevirt CRD, and when it is present we create a special storageclass with the `krbd:rxbounce` map option.
---
## Installation of kubevirt
### 1. By the OpenShift Virtualization operator
The operator gets installed in the `openshift-cnv` namespace.
#### Search & Install OpenShift Virtualization from OCP UI
#### Wait till the operator installation succeeds
#### Create a HyperConverged CR from the UI using the yaml view
#### Wait for it to reach Reconcile Complete Condition
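For reference, a minimal HyperConverged CR looks like the fragment below. The name `kubevirt-hyperconverged` and the `openshift-cnv` namespace are the operator defaults; adjust them if your installation differs.

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
```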
### 2. By kubevirt-operator
Reference: https://kubevirt.io/quickstart_cloud/
#### Use kubectl to deploy the KubeVirt operator:
```
export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | sort -r | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
echo $VERSION
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
```
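The `VERSION` pipeline above simply picks the newest non-rc `tag_name` from the GitHub releases JSON. A self-contained sketch of the same filtering, run against inlined sample data instead of the GitHub API (the tags here are made up for illustration):

```shell
# A few tag_name lines as the GitHub releases API would return them (fabricated).
releases='
    "tag_name": "v1.1.0-rc.0",
    "tag_name": "v1.0.1",
    "tag_name": "v1.1.0",
'
# Same pipeline as above: keep tag_name lines, drop release candidates,
# sort descending, take the first, strip the JSON decoration.
version=$(printf '%s\n' "$releases" | grep tag_name | grep -v -- '-rc' \
  | sort -r | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
echo "$version"   # → v1.1.0
```

Note that the upstream one-liner relies on a plain lexicographic `sort -r`, so it can misorder versions such as v1.9 vs v1.10.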
#### Then use kubectl to deploy the KubeVirt custom resource, which triggers the actual installation:
```
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
```
---
## Verifying the storageclass
No matter which scenario was followed, the steps to verify that the storageclass works remain exactly the same.
### 1. Check the created storageclasses
```
~ $ oc get storageclass
NAME                                         PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                      ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   143m
gp3-csi (default)                            ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   143m
ocs-storagecluster-ceph-rbd                  openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   2m44s
ocs-storagecluster-ceph-rbd-virtualization   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   2m44s
ocs-storagecluster-cephfs                    openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   2m7s
openshift-storage.noobaa.io                  openshift-storage.noobaa.io/obc         Delete          Immediate              false                  73s
```
We can see the storageclass **ocs-storagecluster-ceph-rbd-virtualization**.
Now check that the `parameters` section of the storageclass contains:
> mounter: rbd
> mapOptions: "krbd:rxbounce"
```
~ $ oc get storageclass ocs-storagecluster-ceph-rbd-virtualization -oyaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    description: Provides RWO and RWX Block volumes suitable for Virtual Machine disks
    storageclass.kubevirt.io/is-default-virt-class: "true"
  creationTimestamp: "2023-11-16T11:18:55Z"
  name: ocs-storagecluster-ceph-rbd-virtualization
  resourceVersion: "76460"
  uid: 69e00dae-79ed-4ef8-8f3e-502b0b9b6b1d
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering,deep-flatten,exclusive-lock,object-map,fast-diff
  imageFormat: "2"
  mapOptions: krbd:rxbounce
  mounter: rbd
  pool: ocs-storagecluster-cephblockpool
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
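For a scripted check of the same two parameters, a simple grep over the manifest works. The snippet below inlines an excerpt of the parameters so it runs standalone; on a live cluster, pipe in `oc get storageclass ocs-storagecluster-ceph-rbd-virtualization -oyaml` instead:

```shell
# Excerpt of the StorageClass parameters (inlined here for illustration).
manifest='
parameters:
  mapOptions: krbd:rxbounce
  mounter: rbd
'
# Count the two expected lines; 2 means both options are set.
matches=$(printf '%s\n' "$manifest" | grep -cE '^ *mounter: rbd$|^ *mapOptions: krbd:rxbounce$')
echo "$matches"   # → 2
```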
### 2. Using the storageclass for a PVC & consuming the PVC
#### Create the pvc
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: virtualization-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd-virtualization
EOF
```
```
~ $ oc get pvc| grep virtualization
virtualization-rbd-pvc Bound pvc-c0758a82-3841-446f-b3ff-f7763253cad0 1Gi RWO ocs-storagecluster-ceph-rbd-virtualization 8s
```
#### Create a pod to mount the pvc
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: virtualization-rbd-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: virtualization-rbd-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pod | grep virtualization
virtualization-rbd-pod 1/1 Running 0 23s
```
#### rsh into the pod & use the volume
```
$ oc rsh virtualization-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
### 3. Check clone of the pvc
#### Create the clone
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: virtualization-rbd-pvc-clone
spec:
  storageClassName: ocs-storagecluster-ceph-rbd-virtualization
  dataSource:
    name: virtualization-rbd-pvc
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```
```
~ $ oc get pvc| grep virtualization
virtualization-rbd-pvc Bound pvc-c0758a82-3841-446f-b3ff-f7763253cad0 1Gi RWO ocs-storagecluster-ceph-rbd-virtualization 11m
virtualization-rbd-pvc-clone Bound pvc-359e99fa-acda-42c7-8895-f46176b34281 1Gi RWO ocs-storagecluster-ceph-rbd-virtualization 28s
```
#### Create a pod to mount the cloned pvc
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: virtualization-rbd-pod-clone
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: virtualization-rbd-pvc-clone
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
EOF
```
```
~ $ oc get pod | grep virtualization
virtualization-rbd-pod 1/1 Running 0 11m
virtualization-rbd-pod-clone 1/1 Running 0 9s
```
#### rsh into the pod & use the volume
```
~ $ oc rsh virtualization-rbd-pod-clone
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
# sync && exit
```
We can see that the file `a` created earlier is there too.
---
## Scenarios
1. kubevirt crd is already present when ODF 4.14 is installed. ✅
2. kubevirt crd was already present; ODF is being upgraded from ODF 4.13 to 4.14. ✅
3. kubevirt crd comes up later, after ODF 4.14 is ready. ✅
4. ODF External Mode: kubevirt crd comes up later, after ODF 4.14 is ready.
## 1. kubevirt crd is already present when ODF 4.14 is installed
### 1.1: Follow either of the two methods above to install kubevirt
### 1.2: Check that the kubevirt crd is present
```
~ $ oc get crd | grep virtualmachines.kubevirt
virtualmachines.kubevirt.io 2023-06-12T12:19:50Z
```
### 1.3: Install ODF 4.14
### 1.4: Create storagecluster & wait for it to get Ready
```
~ $ oc get storagecluster
NAME                 AGE     PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   6m20s   Ready              2023-06-12T13:35:27Z   4.14.0
```
### 1.5: Check the storageclasses
```
~ $ oc get storageclass
NAME                                         PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                      ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h16m
gp3-csi (default)                            ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h16m
ocs-storagecluster-ceph-rbd                  openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   55m
ocs-storagecluster-ceph-rbd-virtualization   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   55m
ocs-storagecluster-cephfs                    openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   55m
openshift-storage.noobaa.io                  openshift-storage.noobaa.io/obc         Delete          Immediate              false                  54m
```
We can see that the **ocs-storagecluster-ceph-rbd-virtualization** storageclass has been created.
### 1.6: Use the steps above to verify the storageclass
---
## 2. kubevirt crd was already present, ODF is being upgraded from ODF 4.13 to 4.14.
### 2.1: Install ODF 4.13
### 2.2: Create a storagecluster & wait for it to get ready
### 2.3: Follow either of the two methods above to install kubevirt
### 2.4: Check that the kubevirt crd is present
```
~ $ oc get crd | grep virtualmachines.kubevirt
virtualmachines.kubevirt.io 2023-06-12T12:19:50Z
```
### 2.5: Upgrade ODF to 4.14
### 2.6: Wait for the storagecluster to be ready
```
~ $ oc get storagecluster
NAME                 AGE     PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   6m20s   Ready              2023-06-12T13:35:27Z   4.14.0
```
### 2.7: Check the storageclasses
```
~ $ oc get storageclass
NAME                                         PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                      ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h16m
gp3-csi (default)                            ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h16m
ocs-storagecluster-ceph-rbd                  openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   55m
ocs-storagecluster-ceph-rbd-virtualization   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   55m
ocs-storagecluster-cephfs                    openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   55m
openshift-storage.noobaa.io                  openshift-storage.noobaa.io/obc         Delete          Immediate              false                  54m
```
We can see that the **ocs-storagecluster-ceph-rbd-virtualization** storageclass has been created.
### 2.8: Use the steps above to verify the storageclass
---
## 3. kubevirt crd comes up later after ODF 4.14 is ready.
### 3.1: Check that the kubevirt crd is not present
```
~ $ oc get crd | grep virtualmachines.kubevirt
```
### 3.2: Install ODF 4.14
### 3.3: Create storagecluster & wait for it to get Ready
```
~ $ oc get storagecluster
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   10m   Ready              2023-06-12T15:04:17Z   4.14.0
```
### 3.4: Check the created storageclasses
```
~ $ oc get storageclass
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h57m
gp3-csi (default)             ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h57m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   6m53s
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   6m53s
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  6m10s
```
### 3.5: Follow either of the two methods above to install kubevirt
### 3.6: Check that the kubevirt crd is present
```
~ $ oc get crd | grep virtualmachines.kubevirt
virtualmachines.kubevirt.io 2023-06-12T12:19:50Z
```
### 3.7: Check the storageclasses
```
~ $ oc get storageclass
NAME                                         PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                      ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h16m
gp3-csi (default)                            ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h16m
ocs-storagecluster-ceph-rbd                  openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   55m
ocs-storagecluster-ceph-rbd-virtualization   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   55m
ocs-storagecluster-cephfs                    openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   55m
openshift-storage.noobaa.io                  openshift-storage.noobaa.io/obc         Delete          Immediate              false                  54m
```
We can see that the **ocs-storagecluster-ceph-rbd-virtualization** storageclass has been created.
### 3.8: Use the steps above to verify the storageclass
---
## 4. ODF External Mode-kubevirt crd comes up later after ODF 4.14 is ready.
This feature does not depend on how external mode works, so it should behave the same in internal and external mode (both should work). Just for assurance, we test a single external-mode scenario.
### 4.1: Install ODF 4.14 in External Mode
### 4.2: Create a storagecluster, wait for it to become Ready, and wait for its ReconcileComplete condition
### 4.3: Follow either of the two methods above to install kubevirt
### 4.4: Check that the kubevirt crd is present
```
~ $ oc get crd | grep virtualmachines.kubevirt
virtualmachines.kubevirt.io 2023-06-12T12:19:50Z
```
### 4.5: Check the storageclasses
```
~ $ oc get storageclass
```
### 4.6: Use the steps below to verify the storageclass
## Verifying the storageclass in external Mode
No matter which scenario was followed, the steps to verify that the storageclass works remain exactly the same.
### 1. Check the created storageclasses
```
~ $ oc get storageclass
```
We can see the storageclass **ocs-storagecluster-ceph-rbd-virtualization**.
Now check that the `parameters` section of the storageclass contains:
> mounter: rbd
> mapOptions: "krbd:rxbounce"
```
~ $ oc get storageclass ocs-storagecluster-ceph-rbd-virtualization -oyaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    description: Provides RWO and RWX Block volumes for virtualization environment
  creationTimestamp: "2023-06-12T13:39:52Z"
  name: ocs-storagecluster-ceph-rbd-virtualization
  resourceVersion: "120815"
  uid: 3691ace4-1d74-43a0-a931-93990f32eecb
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering,deep-flatten,exclusive-lock,object-map,fast-diff
  imageFormat: "2"
  mapOptions: krbd:rxbounce
  mounter: rbd
  pool: ocs-storagecluster-cephblockpool
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
### 2. Using the storageclass for a PVC & consuming the PVC
#### Create the pvc
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: virtualization-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd-virtualization
EOF
```
```
~ $ oc get pvc| grep virtualization
virtualization-rbd-pvc Bound pvc-c0758a82-3841-446f-b3ff-f7763253cad0 1Gi RWO ocs-storagecluster-ceph-rbd-virtualization 8s
```
#### Create a pod to mount the pvc
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: virtualization-rbd-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: virtualization-rbd-pvc
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
```
```
~ $ oc get pod | grep virtualization
virtualization-rbd-pod 1/1 Running 0 23s
```
#### rsh into the pod & use the volume
```
$ oc rsh virtualization-rbd-pod
# cd /usr/share/nginx/html
# ls
lost+found
# touch a
# ls
a lost+found
# sync && exit
```
### 3. Check clone of the pvc
#### Create the clone
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: virtualization-rbd-pvc-clone
spec:
  storageClassName: ocs-storagecluster-ceph-rbd-virtualization
  dataSource:
    name: virtualization-rbd-pvc
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
```
~ $ oc get pvc| grep virtualization
virtualization-rbd-pvc Bound pvc-c0758a82-3841-446f-b3ff-f7763253cad0 1Gi RWO ocs-storagecluster-ceph-rbd-virtualization 11m
virtualization-rbd-pvc-clone Bound pvc-359e99fa-acda-42c7-8895-f46176b34281 1Gi RWO ocs-storagecluster-ceph-rbd-virtualization 28s
```
#### Create a pod to mount the cloned pvc
```
cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: virtualization-rbd-pod-clone
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a
  volumes:
    - name: rbd-storage
      persistentVolumeClaim:
        claimName: virtualization-rbd-pvc-clone
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: rbd-storage
```
```
~ $ oc get pod | grep virtualization
virtualization-rbd-pod 1/1 Running 0 11m
virtualization-rbd-pod-clone 1/1 Running 0 9s
```
#### rsh into the pod & use the volume
```
~ $ oc rsh virtualization-rbd-pod-clone
# cd /usr/share/nginx/html
# ls
a lost+found
# touch b
# ls
a b lost+found
# sync && exit
```
We can see that the file `a` created earlier is there too.
---