# Installing kubernetes-csi/csi-driver-nfs on OpenShift with the Console

We have an existing way to do this [with the cli](https://hackmd.io/JoGMdMJlQ_2H4vUuJpu0cw), but I wanted to show how we can also do it with the OpenShift Console.

## Create a HelmRepository

1. Log into the OpenShift console and change to the **Developer Perspective**
   ![Developer Perspective](https://hackmd.io/_uploads/HJ762hNVC.png)
2. Click **Helm** on the left navigation bar
   ![helm](https://hackmd.io/_uploads/ByMeT2440.png)
3. Change the Project to **kube-system**
   ![kube-system project](https://hackmd.io/_uploads/ByFDT344C.png)
4. Create a new Helm Chart Repository
   ![new repository](https://hackmd.io/_uploads/BJDmAnNNA.png)
5. Leave the scope type as Namespaced
   - Set `Name: csi-driver-nfs`
   - Optionally set `Display Name: csi-driver-nfs`
   - Set `URL: https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts`

   ![create repository](https://hackmd.io/_uploads/HkJr0nENA.png)
6. Create the Repository
7. Click **Helm** again and create a new Helm Release
   ![new release](https://hackmd.io/_uploads/S149R2EN0.png)
8. Filter by Chart Repositories, select `Csi Driver Nfs`, then click the Csi Driver Nfs chart card to install it
   ![create release](https://hackmd.io/_uploads/rJ5hA3N4R.png)
9. Click Create

   :::info
   These notes were created using csi-driver-nfs version `4.6.0`
   :::
10. Change the Chart version to `v4.6.0`
11. In the YAML view of the release options we need to change a few settings:

    ```yaml
    controller:
      ...
      runOnControlPlane: false # change to true
      ...
    externalSnapshotter:
      customResourceDefinitions:
        enabled: true # change to false
      enabled: false # change to true
    ```
12. Click Create
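If you prefer to keep those overrides in a file for later reuse, the two changes above can be captured as a values fragment. This is a sketch: the key paths are taken from the chart's YAML view shown in step 11, so double-check them against the chart version you install.

```yaml
# values.yaml — overrides matching the changes described in step 11
controller:
  runOnControlPlane: true   # allow the controller to run on control-plane nodes
externalSnapshotter:
  enabled: true             # deploy the external-snapshotter sidecar
  customResourceDefinitions:
    enabled: false          # OpenShift already provides the snapshot CRDs
```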
All three circles will go dark blue as the Helm Release rolls out the containers to the cluster
![topology](https://hackmd.io/_uploads/rkGkyT4EA.png)

:::success
We are deploying into the `kube-system` project because csi-driver-nfs needs elevated permissions on the OpenShift cluster, which are provided by default in that privileged project
:::

## Create a StorageClass

1. Switch to the **Administrator Perspective**
   ![admin perspective](https://hackmd.io/_uploads/HkmfJpE40.png)
2. Navigate to **Storage -> StorageClasses** and click **Create StorageClass**
   ![storageclass navigation](https://hackmd.io/_uploads/B1xmJpVVA.png)
3. Use the GUI to set:
   - Name: nfs-csi
   - Reclaim policy: Delete
   - Volume binding mode: Immediate
   - Provisioner: nfs.csi.k8s.io
   - Additional parameters:
     - server: `nfs-server.example.com`
     - share: `/example-dir`
     - subDir: `${pvc.metadata.namespace}-${pvc.metadata.name}`

   ![storageclass parameters](https://hackmd.io/_uploads/SyoSypVEA.png)

If we need specific mount options, such as NFS v4.1, we can add one:
- mountOptions: `nfsvers=4.1`

:::info
`subDir` is optional; by default the template for subdirectories is `{nfs-server-address}#{sub-dir-name}#{share-name}`
:::

### Set Default StorageClass

If we do not have one already, we should set this StorageClass as the default for the cluster

1. Click **Actions -> Edit Annotations**
   ![annotations](https://hackmd.io/_uploads/H1Y_J6NVC.png)
2. Set a new annotation
   - `storageclass.kubernetes.io/is-default-class: true`

   ![set annotation](https://hackmd.io/_uploads/SkutJaNNC.png)
3. Click Save

Now you can provision a new PersistentVolumeClaim!

## Create a VolumeSnapshotClass

Snapshots are supported with `csi-driver-nfs` after creating a `VolumeSnapshotClass`. Creating snapshots and clones can be slow because a snapshot request simply runs `tar -czf $DESTINATION_FILE $SOURCE_FOLDER`.
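For reference, the same StorageClass can also be created from YAML instead of the GUI. This sketch reuses the example server and share values from the steps above; substitute your own NFS server details:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # optional: make this the cluster default
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # example value from above
  share: /example-dir              # example value from above
  subDir: ${pvc.metadata.namespace}-${pvc.metadata.name}
mountOptions:
  - nfsvers=4.1                    # optional: force NFS v4.1
reclaimPolicy: Delete
volumeBindingMode: Immediate
```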
More information can be found in the [community discussion](https://github.com/kubernetes-csi/csi-driver-nfs/issues/31#issuecomment-1462297896). If you want fast/instantaneous snapshots and clones, please use your storage vendor's CSI driver (e.g. NetApp, Dell/EMC, Portworx, OpenShift Data Foundation, etc.)

A basic `VolumeSnapshotClass` only requires two pieces of information.

```yaml=
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-driver-nfs # Any name is OK
driver: nfs.csi.k8s.io # Required to be nfs.csi.k8s.io
deletionPolicy: Delete
```

:::warning
Virtual Machine snapshots have a 5 minute timeout by default. The GUI allows you to manually specify a longer time, or you can create VM snapshots via YAML with `failureDeadline` set to a large value.
:::

```yaml=
---
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: snapshot-with-60-minute-timeout
  namespace: my-amazing-namespace
spec:
  failureDeadline: 1h0m0s # Make this long enough to allow tar to finish
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-amazing-vm
```
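To tie the pieces together, a PVC bound to the new StorageClass and a snapshot of it might look like the sketch below. The resource names here are illustrative, not from the original walkthrough:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-pvc            # hypothetical name
spec:
  accessModes:
    - ReadWriteMany           # NFS supports shared read-write access
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-csi   # the StorageClass created above
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-nfs-pvc-snapshot   # hypothetical name
spec:
  volumeSnapshotClassName: csi-driver-nfs  # the VolumeSnapshotClass created above
  source:
    persistentVolumeClaimName: my-nfs-pvc
```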