# Infinidat for OpenShift
This document describes the prerequisite steps for connecting OpenShift to an Infinidat InfiniBox Fibre Channel storage array.
- Documentation - [Configuring RHEL 8 & RHEL 9 for Infinidat](https://support.infinidat.com/hc/en-us/articles/11985136447005-Setting-up-hosts-for-FC-on-RHEL-8-or-above-and-alternatives)
- Documentation - [Installing the Infinidat CSI driver](https://support.infinidat.com/hc/en-us/articles/10106070174749-InfiniBox-CSI-Driver-for-Kubernetes-User-Guide)
- [Example MachineConfig](https://access.redhat.com/solutions/7002456)
## Configuring RHEL 8 & RHEL 9
:::info
This example configures multipath on the worker nodes.
If the control plane / master nodes are also schedulable, create a second copy of the MachineConfig YAML that applies the same configuration to the control plane nodes.
:::
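For reference, the control plane copy differs from the worker file below only in the role label and, by convention, the name; the changed Butane fields would look like this (the rest of the file is identical):
```yaml=
metadata:
  name: 99-master-infinidat-multipath
  labels:
    machineconfiguration.openshift.io/role: master
```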
Create a Butane file named multipath-machineconfig-worker.bu
```bash
vim multipath-machineconfig-worker.bu
```
```yaml=
variant: openshift
version: 4.14.0
metadata:
  name: 99-worker-infinidat-multipath
  labels:
    machineconfiguration.openshift.io/role: worker
openshift:
  kernel_arguments:
    - elevator=none
storage:
  files:
    - path: /etc/modprobe.d/infinidat.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          options lpfc lpfc_hba_queue_depth=4096 lpfc_lun_queue_depth=128
          options qla2xxx ql2xmaxqdepth=128
          options fnic fnic_max_qdepth=128
    - path: /etc/udev/rules.d/99-infinidat-queue.rules
      mode: 0644
      overwrite: true
      contents:
        inline: |
          ACTION=="add|change", KERNEL=="sd[a-z]*", SUBSYSTEM=="block", ENV{ID_VENDOR}=="NFINIDAT", ATTR{queue/scheduler}="none"
          ACTION=="add|change", KERNEL=="dm-*", SUBSYSTEM=="block", ENV{DM_SERIAL}=="36742b0f*", ATTR{queue/scheduler}="none"
          ACTION=="add|change", KERNEL=="sd[a-z]*", SUBSYSTEM=="block", ENV{ID_VENDOR}=="NFINIDAT", ATTR{queue/add_random}="0"
          ACTION=="add|change", KERNEL=="dm-*", SUBSYSTEM=="block", ENV{DM_SERIAL}=="36742b0f*", ATTR{queue/add_random}="0"
          ACTION=="add|change", KERNEL=="sd[a-z]*", SUBSYSTEM=="block", ENV{ID_VENDOR}=="NFINIDAT", ATTR{queue/rq_affinity}="2"
          ACTION=="add|change", KERNEL=="dm-*", SUBSYSTEM=="block", ENV{DM_SERIAL}=="36742b0f*", ATTR{queue/rq_affinity}="2"
    - path: /etc/multipath.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          defaults {
              find_multipaths "yes"
          }
          devices {
              device {
                  vendor               "NFINIDAT"
                  product              "InfiniBox"
                  path_grouping_policy "group_by_prio"
                  path_checker         "tur"
                  features             "0"
                  hardware_handler     "1 alua"
                  prio                 "alua"
                  rr_weight            "priorities"
                  no_path_retry        "queue"
                  rr_min_io            1
                  rr_min_io_rq         1
                  flush_on_last_del    "yes"
                  fast_io_fail_tmo     15
                  dev_loss_tmo         "infinity"
                  path_selector        "service-time 0"
                  failback             "immediate"
                  detect_prio          "no"
                  user_friendly_names  "no"
              }
          }
```
Convert the Butane file into MachineConfig YAML
```bash
# make a directory for the butane binary and add it to PATH
mkdir -p $HOME/bin
export PATH=$HOME/bin:$PATH
# grab the latest butane for Linux amd64
curl -o $HOME/bin/butane https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-amd64
# make it executable
chmod a+x $HOME/bin/butane
# check the version
butane -V
# convert from Butane format to MachineConfig format
butane multipath-machineconfig-worker.bu -o ./multipath-machineconfig-worker.yaml
```
The resulting YAML looks like this
```yaml=
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-infinidat-multipath
spec:
  config:
    ignition:
      version: 3.4.0
    storage:
      files:
        - contents:
            compression: gzip
            source: data:;base64,H4sIAAAAAAAC/8ovKMnMzytWyClISwYT8RlJifGFpamlqfEpqQUlGbYmBpZmEJmc0jwUGUMjCwUumAGFOYlGFRUVCoU5RhW5iRWFWJSk5WUmg4n43MSKeIQKLkAAAAD//+cWcvGFAAAA
          mode: 420
          overwrite: true
          path: /etc/modprobe.d/infinidat.conf
        - contents:
            compression: gzip
            source: data:;base64,H4sIAAAAAAAC/7zPQUoDMRQA0L2nCH9ZUixVdPUX0UQItikksSAiIZ2fscU2Q6fOQmvuLl5gRBx7gscTt14vDCJEos9qHfNLAs7ulTVqhggHeorjj+cRcOYebtyj82qOCKttU70CZ8osj1qGpTJyYQsimDtttBQeOBPe2+O+S106P1TrRN02tQUhNznB2Q8s7cZ9pJwHp6wWs2/y4ur6crqa1KM/msNUI1FoY6ZmVxAmp4j+Uhym2e5DrOtN3ry9F4TpPz1Zv/kVAAD//3b/J7O8AgAA
          mode: 420
          overwrite: true
          path: /etc/udev/rules.d/99-infinidat-queue.rules
        - contents:
            compression: gzip
            source: data:;base64,H4sIAAAAAAAC/1SQsY7jMAxEe38Fof6ApLgPuMPhgDSpticYiY6JyJJDUckai/z7wrKzm+3IN2OLM4F7qtEKfHQAvaSAY40mE9lQwM1cXPfoAt/E8+pZ5zYC3DiFrOCO/w/Hw78/b67RSXOo3sAdUi9J/ub3jZMNeNZcJ0lnnHIUP4NrAE8zTir5xegH9hdWcFZ1xT2TVeUCu7YOpOFOyjhQCnFx7oFipecRksF976p4ZzkPBm6RVEyWcIuUMrYXlU1ncNfKlb8+GiWhZNj/3FGvG+pjLQPmhJGKYeC4tba0BUnGniSijRn2vxsPfMOYS2nMSWvJ5pfshSN7W5otrEvfv0xGht3ztxJP5C/gZBw5CNl2bmBjb7hGT1uZtbBir8IpxBkTjVye4qN7dJ8BAAD//9muB90BAgAA
          mode: 420
          overwrite: true
          path: /etc/multipath.conf
  kernelArguments:
    - elevator=none
```
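Each `source:` is a gzip-compressed, Base64-encoded data URL, so the embedded file contents can be spot-checked by reversing the encoding. For example, decoding the `/etc/modprobe.d/infinidat.conf` payload from the YAML above:

```bash
# decode the gzip+base64 data URL payload back into the original file contents
echo 'H4sIAAAAAAAC/8ovKMnMzytWyClISwYT8RlJifGFpamlqfEpqQUlGbYmBpZmEJmc0jwUGUMjCwUumAGFOYlGFRUVCoU5RhW5iRWFWJSk5WUmg4n43MSKeIQKLkAAAAD//+cWcvGFAAAA' | base64 -d | gunzip
```

This should print the three `options` lines from the Butane file.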
Apply the MachineConfig; the Machine Config Operator will drain and reboot the affected node(s)
```bash
oc create -f multipath-machineconfig-worker.yaml
```
## Installing the Infinidat CSI driver
### Grant additional permissions
```bash
#switch to project
oc project infinidat-csi
#add permissions
oc adm policy add-scc-to-user privileged -z infinidat-csi-operator-controller-manager
oc adm policy add-scc-to-user privileged -z infinidat-csi-operator-infinidat-csi-driver
oc adm policy add-scc-to-user privileged -z infinidat-csi-operator-infinidat-csi-node
oc adm policy add-scc-to-user privileged -z infinidat-csi-operator-infinidat-csi-controller
oc adm policy add-scc-to-user anyuid -z infinidat-csi-operator-controller-manager
oc adm policy add-scc-to-user anyuid -z infinidat-csi-operator-infinidat-csi-driver
oc adm policy add-scc-to-user anyuid -z infinidat-csi-operator-infinidat-csi-node
oc adm policy add-scc-to-user anyuid -z infinidat-csi-operator-infinidat-csi-controller
oc adm policy add-scc-to-user hostnetwork -z infinidat-csi-operator-controller-manager
oc adm policy add-scc-to-user hostnetwork -z infinidat-csi-operator-infinidat-csi-driver
oc adm policy add-scc-to-user hostnetwork -z infinidat-csi-operator-infinidat-csi-node
oc adm policy add-scc-to-user hostnetwork -z infinidat-csi-operator-infinidat-csi-controller
```
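The twelve grants above are the cross product of three SCCs and four service accounts, so they can also be generated with a loop; this sketch prints each command for review (drop the `echo` to actually run them):

```bash
# three SCCs x four service accounts = twelve grants
for scc in privileged anyuid hostnetwork; do
  for sa in controller-manager infinidat-csi-driver infinidat-csi-node infinidat-csi-controller; do
    echo oc adm policy add-scc-to-user "$scc" -z "infinidat-csi-operator-${sa}"
  done
done
```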
### Create the Secret
Open question: is the `Secret` created by the Custom Resource (`InfiniboxCsiDriver`), or should the `Secret` be created manually, with `skipCredentialsCreation: true` then set in the Custom Resource?
A separate Secret must be defined in the Kubernetes cluster for every managed InfiniBox array. It must include the InfiniBox hostname, administrator credentials, and optional CHAP authentication credentials (for iSCSI), all Base64-encoded.
To encode an entry:
```bash
$ echo -n ibox0001.company.com | base64
aWJveDAwMDEuY29tcGFueS5jb20=
```
Sample secret file:
```yaml=
apiVersion: v1
kind: Secret
metadata:
  name: infi0001-credentials
  namespace: infinidat-csi
type: Opaque
data:
  hostname: aWJveDAwMDEuY29tcGFueS5jb20=
  node.session.auth.password: MC4wMDB1czA3Ym9mdGpv
  node.session.auth.password_in: MC4wMDI2OHJ6dm1wMHI3
  node.session.auth.username: aXFuLjIwMjAtMDYuY29tLmNzaS1kcml2ZXItaXNjc2kuaW5maW5pZGF0OmNvbW1vbmlu
  node.session.auth.username_in: aXFuLjIwMjAtMDYuY29tLmNzaS1kcml2ZXItaXNjc2kuaW5maW5pZGF0OmNvbW1vbm91dA==
  password: MTIzNDU2
  username: azhzYWRtaW4=
```
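The sample values decode back the same way; for example, the `username` and `password` entries above (the decoded values are the sample's placeholders, not real credentials):

```bash
echo -n 'azhzYWRtaW4=' | base64 -d   # username: prints k8sadmin
echo -n 'MTIzNDU2' | base64 -d       # password: prints 123456
```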
### VolumeSnapshotClass
```yaml=
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  # VolumeSnapshotClass is cluster-scoped, so no namespace is set
  name: ibox-snapshotclass-demo-locking
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: infinibox-csi-driver
deletionPolicy: Delete
parameters:
  lock_expires_at: "1 Hours"
  csi.storage.k8s.io/snapshotter-secret-name: infinibox-creds
  csi.storage.k8s.io/snapshotter-secret-namespace: infinidat-csi
```
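With the class in place, a snapshot can reference it by name; a minimal sketch, assuming a PVC named `demo-pvc` (hypothetical) already provisioned by the Infinidat StorageClass:
```yaml=
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot
  namespace: infinidat-csi
spec:
  volumeSnapshotClassName: ibox-snapshotclass-demo-locking
  source:
    persistentVolumeClaimName: demo-pvc # hypothetical existing PVC
```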
### StorageProfile (for KubeVirt / OpenShift Virtualization)
https://docs.openshift.com/container-platform/4.15/virt/storage/virt-configuring-storage-profile.html
Could this StorageProfile be added to the `containerized-data-importer` repo?
https://github.com/kubevirt/containerized-data-importer/blob/main/doc/onboarding-storage-provisioners.md
https://github.com/kubevirt/containerized-data-importer/commit/64d0d1697dd97585879ca254c77873db62d55294
```yaml=
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: ibox0001-fc-storageclass
spec:
  cloneStrategy: csi-clone # or "snapshot"
  claimPropertySets:
    - accessModes:
        - ReadWriteMany # or ReadWriteOnce
      volumeMode: Block # or Filesystem
```
### Disconnected / proxied woes
Several Infinidat container images are pulled from registries that are not part of the proxy's list of allowed destinations,
e.g. registry.k8s.io, gcr.io, and their various mirror backends such as pkg.dev, amazonaws.com, googleapis.com, etc.
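One workaround is to mirror the required images into a registry the proxy does allow (or an internal registry) and redirect pulls with an ImageDigestMirrorSet; a minimal sketch, assuming a mirror at registry.example.com (hypothetical):
```yaml=
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: infinidat-csi-mirrors
spec:
  imageDigestMirrors:
    - source: registry.k8s.io
      mirrors:
        - registry.example.com/registry.k8s.io # hypothetical internal mirror
    - source: gcr.io
      mirrors:
        - registry.example.com/gcr.io
```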