# Connecting OpenShift to NFS storage with csi-driver-nfs

:::info
Please note, this procedure is also [available via the GUI](https://hackmd.io/@kincl/HyjBm2V4R)
:::

Connecting an NFS server, such as a common Linux server, to OpenShift can be a convenient way to provide storage for multi-pod (ReadWriteMany/RWX) Persistent Volumes. *Fancy NFS servers*, like NetApp or Dell/EMC Isilon, have their own Container Storage Interface (CSI) drivers and shouldn't use the csi-driver-nfs described in this document. Using the appropriate CSI driver for your storage provides additional features like efficient snapshots and clones.

The goal of this document is to describe how to install the [csi-driver-nfs](https://github.com/kubernetes-csi/csi-driver-nfs) provisioner onto OpenShift. The procedure can be performed [using the Web Console](https://hackmd.io/t34YaR0WS7GzNKIXwko8Zg), but I'll document the Helm/command-line method here because it's easier to copy and paste. The process of configuring a Linux server to export NFS shares is [described in the Appendix below](#RHEL-as-an-NFS-server).

**Please note:** The `ServiceAccounts` created by the official [Helm chart](https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/charts/README.md) require additional permissions in order to overcome OpenShift's default security policy and run the `Pods`.

## Overview

I assume that you already have an NFS server available. If not, follow the instructions in the Appendix to [make a Linux server provide NFS services](#RHEL-as-an-NFS-server). Once an NFS server is available, the remaining steps are to:

1. [Make sure `helm` is installed](#Install-the-Helm-CLI-tool)
2. [Connect `helm` to the csi-driver-nfs chart repository](#Add-the-Helm-repo-and-list-available-charts)
3. [Install the chart with certain overrides](#Install-a-chart-release)
4. [Give additional permissions to the `ServiceAccounts`](#Grant-additional-permissions-to-the-ServiceAccounts)
5. [Create a StorageClass](#Create-a-StorageClass)

## Install the Helm CLI tool

You must have Helm installed on your workstation before proceeding. Alternatively, you can use OpenShift's [Web Terminal Operator](https://www.redhat.com/en/blog/a-deeper-look-at-the-web-terminal-operator-1) to run all of the commands from the OpenShift Web Console. The Web Terminal Operator environment has `helm` and `oc` already installed.

```bash
# Download the helm binary and save it to $HOME/bin/helm
curl -L -o $HOME/bin/helm https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64

# Allow helm to be executed
chmod a+x $HOME/bin/helm

# Confirm that helm is working by reporting the version
helm version
```

## Add the Helm repo and list available charts

Helm uses the term "chart" when referring to the collection of resources that make up a Kubernetes application. Helm charts are published in repos/repositories. The csi-driver-nfs chart instructs OpenShift to create various resources, such as a `Deployment` that creates a controller `Pod` and a `DaemonSet` that creates a mount/unmount `Pod` on each server in the cluster. Two `ServiceAccounts` are also created in order to run the `Pods` with appropriate permissions.

:::info
These notes were created using csi-driver-nfs version `4.11.0` in June 2025.
:::

```bash
# Tell helm where to find the csi-driver-nfs chart repository
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts

# Ask helm to list the repo contents
helm search repo -l csi-driver-nfs
```
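
If you'd like to see exactly which settings the chart exposes before overriding any of them in the next step, you can dump its default values. This is optional and only for reference; the repo name and pinned version match the ones used above.

```bash
# Print the chart's default values (the keys used with --set below come from here)
helm show values csi-driver-nfs/csi-driver-nfs --version 4.11.0
```
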
## Install a chart release

Helm uses the term "release" when referring to an instance of a chart/application running in a Kubernetes cluster. A single chart can be installed many times into the same cluster. Each time it's installed, a new release is created. **The csi-driver-nfs chart should only be installed once!** More info is available at https://helm.sh/docs/intro/using_helm

My example command below uses several `--set name=value` arguments to override certain default chart values. By overriding the default values I can:

1. Create a new namespace/project for the resources created by csi-driver-nfs
2. Allow the pods that create NFS subdirectories to run on the ControlPlane nodes
3. Have two NFS controller Pods running for high-availability
4. Deploy the optional CSI external snapshot controller
5. Control whether or not the Custom Resource Definitions (CRD) required for the external snapshot controller are installed (they can only be installed once)

:::info
OpenShift installs the external snapshot CRDs by default. Helm will fail and abort when it finds the existing CRDs because the CRDs weren't created (and labeled) by Helm. It's OK to disable the snapshot CRD creation when you see this error:

> Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "volumesnapshots.snapshot.storage.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "csi-driver-nfs"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "csi-driver-nfs"
:::

**Option 1** - Install everything, but don't try to recreate the external snapshot CRDs that OpenShift installs by default

```bash
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --version 4.11.0 \
  --create-namespace \
  --namespace csi-driver-nfs \
  --set controller.runOnControlPlane=true \
  --set controller.replicas=2 \
  --set controller.strategyType=RollingUpdate \
  --set externalSnapshotter.enabled=true \
  --set externalSnapshotter.customResourceDefinitions.enabled=false
```

**Option 2** - Install on a Single Node OpenShift (SNO) cluster. When there is only one node in the cluster, we don't need multiple replicas of the controller pod

```bash
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --version 4.11.0 \
  --create-namespace \
  --namespace csi-driver-nfs \
  --set controller.runOnControlPlane=true \
  --set externalSnapshotter.enabled=true \
  --set externalSnapshotter.customResourceDefinitions.enabled=false
```

## Grant additional permissions to the ServiceAccounts

:::danger
These commands allow the Pods to run as root with all permissions. Ideally specific permissions would be granted with all other permissions restricted. I started working on this in the Appendix, but never finished it. 😭
:::

```bash
oc adm policy add-scc-to-user privileged -z csi-nfs-node-sa -n csi-driver-nfs
oc adm policy add-scc-to-user privileged -z csi-nfs-controller-sa -n csi-driver-nfs
```
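
At this point the controller and node pods should start (or restart) cleanly. A quick, optional sanity check follows; the exact pod names and counts will vary with your cluster size and the replica settings chosen above.

```bash
# The controller Deployment pods and the per-node DaemonSet pods should all reach Running
oc get pods -n csi-driver-nfs

# If a pod is stuck, its events usually explain why (for example an SCC or scheduling problem)
oc describe pods -n csi-driver-nfs -l app=csi-nfs-controller
```
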
## Create a StorageClass

Now that the csi-driver-nfs chart has been installed, and allowed to run with elevated permissions, it's time to create a `StorageClass`.

The `StorageClass` we create must reference the csi-driver-nfs provisioner and provide additional parameters which identify which NFS server and exported folder/share to use. An additional `subDir:` parameter is defined which controls the name of the folder that gets dynamically created when a Persistent Volume Claim is created.

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   ### NFS server's IP/FQDN
  share: /example-dir              ### NFS server's exported directory
  subDir: ${pvc.metadata.namespace}-${pvc.metadata.name}-${pv.metadata.name}   ### Folder/subdir name template
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```

If the NFS server only supports NFSv4, add this to the StorageClass:

```yaml
mountOptions:
  - nfsvers=4.1
```

## Create a VolumeSnapshotClass

You must create a `VolumeSnapshotClass` if you want to create snapshots.

:::danger
You should know that snapshots and clones are slow! If you want fast snapshots and clones, please use your storage vendor's CSI driver (e.g. NetApp, Dell/EMC, Portworx, OpenShift Data Foundation, etc...) Creating a snapshot simply runs `tar -czf $DESTINATION_FILE $SOURCE_FOLDER`. More information can be found in the [community discussion](https://github.com/kubernetes-csi/csi-driver-nfs/issues/31#issuecomment-1462297896).
:::

I've copied the YAML from the [upstream community files](https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/example/snapshot/snapshotclass-nfs.yaml).

```yaml=
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
deletionPolicy: Delete
driver: nfs.csi.k8s.io
metadata:
  name: csi-nfs-snapclass
```

:::warning
I've noticed that creating snapshots of Virtual Machines takes longer than the default timeout of 5 minutes. I was still able to create a snapshot by increasing the timeout in the `VirtualMachineSnapshot` YAML.
:::

:::spoiler Click here to see the YAML
```yaml=
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: snapshot-with-60-minute-timeout
  namespace: my-amazing-app
spec:
  failureDeadline: 1h0m0s
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-amazing-vm
```
:::

## Proving that it works

I find it helpful to have a default StorageClass. It's possible to have multiple default StorageClasses, but the behavior is undefined. The thought of undefined behavior scares me! 😱

Check that no other StorageClass has been marked as the default with:

```bash
oc get storageclass
```

You can make this StorageClass the default by adding an annotation after creating/applying the YAML below

```bash
oc annotate storageclass/nfs-csi storageclass.kubernetes.io/is-default-class=true
```

:::success
All done! The StorageClass has been created, and you can see it working by creating a sample PVC with this YAML

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs
  namespace: csi-driver-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  storageClassName: nfs-csi   ### remove this line to test the default StorageClass
```

The PVC should become `Bound` in a few seconds...

```bash
oc get pvc -n csi-driver-nfs

NAME       STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-nfs   Bound    pvc-abcdef-12345-...   1Gi        RWX            nfs-csi        5s
```
:::
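
If you enabled the external snapshotter and created the `VolumeSnapshotClass` above, you can exercise it against the same test PVC. This is only a sketch; the snapshot name below is made up, and remember that these tar-based snapshots are slow for large volumes.

```yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-nfs-snapshot        ### illustrative name
  namespace: csi-driver-nfs
spec:
  volumeSnapshotClassName: csi-nfs-snapclass
  source:
    persistentVolumeClaimName: test-nfs
```

The snapshot is finished when `READYTOUSE` shows `true`:

```bash
oc get volumesnapshot -n csi-driver-nfs
```
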
# Appendix

## Update the Image Registry's configuration to use shared NFS storage

If you have recently installed OpenShift, and didn't provide storage configuration details for the internal Image Registry, you may be interested in updating the Image Registry's configuration to use the new NFS StorageClass.

:::info
This command will reconfigure the internal Image Registry. The updated configuration will:
- Create the default Persistent Volume Claim from the default StorageClass
- The default PVC requests a 100GB ReadWriteMany/RWX volume
- Make the internal registry highly available by running 2 pods/replicas
- Revert the management state to Managed if it had been marked as Removed

```bash
oc patch config.imageregistry.operator.openshift.io/cluster --type=merge \
  -p '{"spec":{"rolloutStrategy":"RollingUpdate","replicas":2,"managementState":"Managed","storage":{"pvc":{"claim":""}}}}'
```
:::

:::danger
The command below will terminate the Image Registry pods and delete the default Persistent Volume Claim. PVCs that were created manually will not be deleted. Only use this if you're planning on changing the internal Image Registry's configuration. All data/images in the Image Registry will be lost!

```bash
oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"managementState":"Removed"}}'
```
:::
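
After patching, the Image Registry operator takes a few minutes to roll out the new configuration. An optional way to watch it finish:

```bash
# The image-registry ClusterOperator should eventually report Available=True
oc get clusteroperator image-registry

# The operator creates its default PVC from the default StorageClass in this namespace
oc get pvc,pods -n openshift-image-registry
```
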
## RHEL as an NFS server

Here is an abbreviated set of commands to configure a RHEL8 server as an NFS server.

```bash
# Install the RPM
dnf install nfs-utils

# Create an export directory with adequate permissions and SELinux labels
mkdir -p /exports/openshift
chmod 755 /exports/openshift
chown -R nfsnobody:nfsnobody /exports/openshift
semanage fcontext --add --type nfs_t "/exports/openshift(/.*)?"
restorecon -R -v /exports/openshift

# Add the directory to the list of exports
echo "/exports/openshift *(insecure,no_root_squash,async,rw)" >> /etc/exports

# Make sure the firewall isn't blocking anything
firewall-cmd --add-service nfs \
             --add-service mountd \
             --add-service rpc-bind \
             --permanent
firewall-cmd --reload

# Start the NFS server service (also starts following reboots)
systemctl enable --now nfs-server

# Check everything from the server's perspective
systemctl status nfs-server
exportfs -av
showmount -e localhost
```

You can mount the export from another Linux system for testing purposes.

```bash
mount nfs.example.com:/exports/openshift /mnt
touch /mnt/test-file
mkdir /mnt/test-dir
ls -lR /mnt   # confirm these folders and files are owned by root
rm -rvf /mnt/test-*
umount /mnt
```

## Ubuntu as an NFS server

:::danger
Ubuntu 22.04 LTS ships with an NFS server configuration that rewrites the group list provided by NFS clients (OpenShift). In order for Ubuntu to function with csi-driver-nfs and OpenShift, you must disable the "`manage-gids=y`" portion of the `/etc/nfs.conf` file using these two commands.

```bash
echo -e "[mountd]\nmanage-gids=n" | sudo tee /etc/nfs.conf.d/disable-mountd-manage-gids.conf
sudo systemctl restart nfs-server nfs-idmapd
```
:::

## Access Control Lists causing "permission denied" errors

I noticed that a particular ACL / default ACL was causing permission denied errors from processes in the Pod. Everything outside the Pod seemed to work fine though...

```bash
[root@lenovo ~]# getfacl /data/nvme-raid5/guacamole-nfs-storageclass/
# file: .
# owner: jcall
# group: jcall
user::rwx
user:jcall:rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:user:jcall:rwx
default:group::r-x
default:mask::rwx
default:other::r-x

[root@lenovo ~]# setfacl -b -k /data/nvme-raid5/guacamole-nfs-storageclass/
```

## Logs and debugging

If you're curious and would like to learn more about how this Provisioner works, or troubleshoot and debug failures, you can get more information by inspecting the namespace events and pod logs with these commands. The logs below show three log groupings. The first group shows the events in the namespace when a PVC is created. The second grouping shows the logs from a PVC being created, which results in a subdirectory being created on the NFS server. The third log grouping shows the logs when a Pod attempts to mount and use the storage.

```bash
oc get events -n csi-driver-nfs &

LAST SEEN   TYPE     REASON                  OBJECT                           MESSAGE
7m28s       Normal   ExternalProvisioning    persistentvolumeclaim/test-nfs   waiting for a volume to be created, either by external provisioner "nfs.csi.k8s.io" or manually created by system administrator
7m28s       Normal   Provisioning            persistentvolumeclaim/test-nfs   External provisioner is provisioning volume for claim "csi-driver-nfs/test-nfs"
7m24s       Normal   ProvisioningSucceeded   persistentvolumeclaim/test-nfs   Successfully provisioned volume pvc-5daf545e-1ca0-4285-ba91-64fd67133b00


oc logs -f -l app=csi-nfs-controller -n csi-driver-nfs &

I1114 18:12:23.858858       1 controller.go:1366] provision "csi-driver-nfs/new450" class "nfs-csi": started
I1114 18:12:23.859290       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"csi-driver-nfs", Name:"new450", UID:"70e4a89e-571c-49e3-adf3-3264c6dc553d", APIVersion:"v1", ResourceVersion:"150582625", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "csi-driver-nfs/new450"
I1114 18:12:24.163417       1 controller.go:923] successfully created PV pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d for PVC new450 and csi volume name rhdata6.dota-lab.iad.redhat.com#data/hdd/ocp-vmware#pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d##
I1114 18:12:24.163682       1 controller.go:1449] provision "csi-driver-nfs/new450" class "nfs-csi": volume "pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d" provisioned
I1114 18:12:24.163752       1 controller.go:1462] provision "csi-driver-nfs/new450" class "nfs-csi": succeeded
I1114 18:12:24.178981       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"csi-driver-nfs", Name:"new450", UID:"70e4a89e-571c-49e3-adf3-3264c6dc553d", APIVersion:"v1", ResourceVersion:"150582625", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d


oc logs -f -l app=csi-nfs-node -c nfs -n csi-driver-nfs &

I1114 18:23:17.540284       1 utils.go:107] GRPC call: /csi.v1.Node/NodePublishVolume
I1114 18:23:17.540327       1 utils.go:108] GRPC request: {"target_path":"/var/lib/kubelet/pods/ec96ea93-a4d9-4e0b-86a4-8768b324242e/volumes/kubernetes.io~csi/pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d","csi.storage.k8s.io/pvc/name":"new450","csi.storage.k8s.io/pvc/namespace":"csi-driver-nfs","server":"rhdata6.dota-lab.iad.redhat.com","share":"/data/hdd/ocp-vmware","storage.kubernetes.io/csiProvisionerIdentity":"1699985041046-4693-nfs.csi.k8s.io","subdir":"pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d"},"volume_id":"rhdata6.dota-lab.iad.redhat.com#data/hdd/ocp-vmware#pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d##"}
I1114 18:23:17.540807       1 nodeserver.go:123] NodePublishVolume: volumeID(rhdata6.dota-lab.iad.redhat.com#data/hdd/ocp-vmware#pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d##) source(rhdata6.dota-lab.iad.redhat.com:/data/hdd/ocp-vmware/pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d) targetPath(/var/lib/kubelet/pods/ec96ea93-a4d9-4e0b-86a4-8768b324242e/volumes/kubernetes.io~csi/pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d/mount) mountflags([])
I1114 18:23:17.541201       1 mount_linux.go:245] Detected OS without systemd
I1114 18:23:17.541229       1 mount_linux.go:220] Mounting cmd (mount) with arguments (-t nfs rhdata6.dota-lab.iad.redhat.com:/data/hdd/ocp-vmware/pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d /var/lib/kubelet/pods/ec96ea93-a4d9-4e0b-86a4-8768b324242e/volumes/kubernetes.io~csi/pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d/mount)
I1114 18:23:19.898894       1 nodeserver.go:140] skip chmod on targetPath(/var/lib/kubelet/pods/ec96ea93-a4d9-4e0b-86a4-8768b324242e/volumes/kubernetes.io~csi/pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d/mount) since mountPermissions is set as 0
I1114 18:23:19.898932       1 nodeserver.go:142] volume(rhdata6.dota-lab.iad.redhat.com#data/hdd/ocp-vmware#pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d##) mount rhdata6.dota-lab.iad.redhat.com:/data/hdd/ocp-vmware/pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d on /var/lib/kubelet/pods/ec96ea93-a4d9-4e0b-86a4-8768b324242e/volumes/kubernetes.io~csi/pvc-70e4a89e-571c-49e3-adf3-3264c6dc553d/mount succeeded
I1114 18:23:19.898945       1 utils.go:114] GRPC response: {}
```

:::spoiler List of TODO items

## TODO: Create a customized SecurityContextConstraint

ADDITIONAL TESTING REQUIRED HERE

```
---
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: csi-driver-nfs
allowHostDirVolumePlugin: true
allowHostNetwork: true
allowHostPorts: true
allowPrivilegedContainer: true
allowPrivilegeEscalation: true
allowedCapabilities:
  - SYS_ADMIN
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:csi-driver-nfs
rules:
  - apiGroups:
      - security.openshift.io
    resources:
      - securitycontextconstraints
    resourceNames:
      - csi-driver-nfs
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:openshift:scc:csi-driver-nfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:csi-driver-nfs
subjects:
  - kind: ServiceAccount
    name: csi-nfs-node-sa
    namespace: csi-driver-nfs
  - kind: ServiceAccount
    name: csi-nfs-controller-sa
    namespace: csi-driver-nfs
```
:::
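
If you want to experiment with the untested SCC above, one possible test flow (a sketch only; the filename is hypothetical) is to apply the YAML, then remove the blanket `privileged` grant so the ServiceAccounts fall back to the custom SCC:

```bash
# Apply the custom SCC, ClusterRole, and ClusterRoleBinding from the spoiler above
# (save that YAML to this hypothetical filename first)
oc apply -f csi-driver-nfs-scc.yaml

# Remove the broad "privileged" grant so the custom SCC gets used instead
oc adm policy remove-scc-from-user privileged -z csi-nfs-node-sa -n csi-driver-nfs
oc adm policy remove-scc-from-user privileged -z csi-nfs-controller-sa -n csi-driver-nfs

# Recreate the pods so they are re-admitted under the new SCC
oc delete pods --all -n csi-driver-nfs
```
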
## Override image source

```bash=
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --version v4.9.0 \
  --create-namespace \
  --namespace csi-driver-nfs \
  --set controller.runOnControlPlane=true \
  --set controller.replicas=2 \
  --set controller.strategyType=RollingUpdate \
  --set externalSnapshotter.enabled=true \
  --set externalSnapshotter.customResourceDefinitions.enabled=false \
  --set image.baseRepo=quay.io/jcall \
  --set image.nfs.repository=quay.io/jcall/sig-storage/nfsplugin \
  --set image.nfs.tag=v4.9.0 \
  --set image.csiProvisioner.repository=quay.io/jcall/sig-storage/csi-provisioner \
  --set image.csiProvisioner.tag=v5.0.2 \
  --set image.csiSnapshotter.repository=quay.io/jcall/sig-storage/csi-snapshotter \
  --set image.csiSnapshotter.tag=v8.0.1 \
  --set image.livenessProbe.repository=quay.io/jcall/sig-storage/livenessprobe \
  --set image.livenessProbe.tag=v2.13.1 \
  --set image.nodeDriverRegistrar.repository=quay.io/jcall/sig-storage/csi-node-driver-registrar \
  --set image.nodeDriverRegistrar.tag=v2.11.1 \
  --set image.externalSnapshotter.repository=quay.io/jcall/sig-storage/snapshot-controller \
  --set image.externalSnapshotter.tag=v8.0.1
```
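
If you install with these image overrides, you can confirm the pods are really pulling from your mirror by listing every container image in the namespace. Just an optional check:

```bash
# All images should point at the overridden registry (quay.io/jcall in this example)
oc get pods -n csi-driver-nfs -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u
```
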
