How to use GPUs with OKD 4.5
===

###### tags: `HowTo` `OKD` `GPU` `4.5`

:::info
- **Author:** Zvonko Kaiser
- **Date:** April 22, 2020
- **Last modified:** Aug 17, 2020
- **References:**
    - OpenShift Commons Briefing YouTube: bit.ly/ocbgpu
    - OpenShift Commons Briefing Deck: bit.ly/GPUONOPENSHIFT
:::

## Using a MachineSet to scale the cluster with a GPU node

The snippet below shows the GPU-relevant parts of the AWS provider spec; a complete MachineSet skeleton is sketched in Appendix A at the end of this document.

```
spec:
  replicas: 2
. . .
      credentialsSecret:
        name: aws-cloud-credentials
      deviceIndex: 0
      iamInstanceProfile:
        id: okd-v4-gpu-bsr6p-worker-profile
      instanceType: g4dn.xlarge
      kind: AWSMachineProviderConfig
      metadata:
        creationTimestamp: null
      placement:
        availabilityZone: us-west-2a
        region: us-west-2
```

### Deploy the machines with the custom GPU MachineSet

```
$ oc create -f okd-gpu-worker.yaml
$ oc get machineset
NAME                   DESIRED   CURRENT   READY   AVAILABLE   AGE
okd-v4-gpu-bsr.        2         2
```

## Deploy NFD from GitHub (it will be part of OperatorHub in OKD)

The Node Feature Discovery (NFD) operator labels nodes with their hardware features, e.g. the PCI vendor IDs of attached devices and the kernel version.

```
$ git clone https://github.com/openshift/cluster-nfd-operator
$ cd cluster-nfd-operator && git checkout release-4.5
$ PULLPOLICY=Always make deploy
```

### Verify that the GPU nodes are labelled correctly

```
$ oc describe nodes | grep 10de   # 0x10de is the NVIDIA PCI vendor ID
feature.node.kubernetes.io/pci-10de.present=true
feature.node.kubernetes.io/pci-10de.present=true

$ oc describe nodes | grep kernel
feature.node.kubernetes.io/kernel-version.full=5.5.15-XXX
feature.node.kubernetes.io/kernel-version.major=5
feature.node.kubernetes.io/kernel-version.minor=5
feature.node.kubernetes.io/kernel-version.revision=15
```

## Create a DriverContainer for Fedora 32 (working with NVIDIA to add Fedora CoreOS support)

```
$ git clone https://gitlab.com/zvonkok/driver
$ cd driver && git checkout fedora
$ cd fedora
$ podman build --build-arg FEDORA_VERSION=32 -t driver:450.51.05-fedora32 .
$ podman push driver:450.51.05-fedora32 <registry>/<repo>/driver:450.51.05-fedora32
```

The registry, repository, and driver version are passed to the NVIDIA GPU operator later so that it pulls exactly this image.

## Update the CRI-O hooks_dir (https://bugzilla.redhat.com/show_bug.cgi?id=1853735)

```
$ oc debug node/<GPU-NODE>
$ chroot /host
# CHANGE /usr/share/containers/oci/hooks.d to /etc/containers/oci/hooks.d/ in /etc/crio/crio.conf
# systemctl daemon-reload
# systemctl restart crio
```

This manual edit has to be repeated on every GPU node; a declarative MachineConfig alternative is sketched in Appendix B.

## Deploy the NVIDIA GPU Operator (for now via Helm only)

```
$ helm repo add nvidia https://nvidia.github.io/gpu-operator
$ helm repo update

# Create the namespace for housing the operator
$ oc new-project gpu-operator

$ helm install --devel nvidia/gpu-operator \
    --set platform.openshift=true,operator.defaultRuntime=crio,driver.repository=<registry>/<repo>,driver.version=450.51.05,toolkit.version=1.0.2-ubi8,nfd.enabled=false \
    --wait --generate-name
```

### Verify that GPUs are enabled; the nodes now advertise the extended resource `nvidia.com/gpu` 🎉

A quick single-pod smoke test is sketched in Appendix C.

```
$ oc describe nodes | grep nvidia
  nvidia.com/gpu: 1
  feature.node.kubernetes.io/pci-10de.present=true
```

## Run a GPU workload (multi-node)

```
$ git clone https://github.com/kubeflow/mpi-operator
$ cd mpi-operator
$ oc create -f deploy/v1/mpi-operator.yaml
$ oc create -f https://raw.githubusercontent.com/kubeflow/mpi-operator/master/examples/v1/tensorflow-benchmarks.yaml
```

## Autoscaling GPU Nodes

https://www.openshift.com/blog/simplifying-deployments-of-accelerated-ai-workloads-on-red-hat-openshift-with-nvidia-gpu-operator
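The blog post above covers the full workflow. As a minimal sketch, assuming the GPU MachineSet is named `okd-v4-gpu-bsr6p-worker-us-west-2a` (a hypothetical name, check `oc get machineset -n openshift-machine-api` for yours), a `ClusterAutoscaler` plus a `MachineAutoscaler` targeting it could look like this:

```
# ClusterAutoscaler: the cluster-wide autoscaling switch; its name must be "default".
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
      - type: nvidia.com/gpu   # cap autoscaling by the GPU extended resource
        min: 0
        max: 4
---
# MachineAutoscaler: binds min/max replicas to the GPU MachineSet.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: gpu-worker-us-west-2a                    # hypothetical name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 4
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: okd-v4-gpu-bsr6p-worker-us-west-2a     # replace with your GPU MachineSet
```

Pods that request `nvidia.com/gpu` and remain unschedulable will then trigger a scale-up of the GPU MachineSet.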
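## Appendix A: Complete GPU MachineSet skeleton

The MachineSet snippet at the top of this document omits the surrounding structure. The following is a sketch, not a drop-in file: it assumes the infrastructure ID `okd-v4-gpu-bsr6p` (taken from the IAM instance profile above), and the AMI, subnet, and security-group values are placeholders you must replace with your cluster's.

```
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: okd-v4-gpu-bsr6p-worker-us-west-2a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: okd-v4-gpu-bsr6p
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: okd-v4-gpu-bsr6p
      machine.openshift.io/cluster-api-machineset: okd-v4-gpu-bsr6p-worker-us-west-2a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: okd-v4-gpu-bsr6p
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: okd-v4-gpu-bsr6p-worker-us-west-2a
    spec:
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          ami:
            id: ami-XXXXXXXX            # replace with the Fedora CoreOS AMI for your region
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: okd-v4-gpu-bsr6p-worker-profile
          instanceType: g4dn.xlarge     # 1x NVIDIA T4
          placement:
            availabilityZone: us-west-2a
            region: us-west-2
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - okd-v4-gpu-bsr6p-worker-sg        # placeholder
          subnet:
            filters:
              - name: tag:Name
                values:
                  - okd-v4-gpu-bsr6p-private-us-west-2a # placeholder
          tags:
            - name: kubernetes.io/cluster/okd-v4-gpu-bsr6p
              value: owned
```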
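## Appendix B: Persisting the CRI-O hooks_dir change with a MachineConfig

Instead of editing `/etc/crio/crio.conf` on each node by hand, a MachineConfig can roll the setting out to every worker. This sketch assumes the CRI-O version on your nodes reads drop-in files from `/etc/crio/crio.conf.d`; the resource name is hypothetical, and `contents.source` must carry the base64-encoded TOML shown in the comment.

```
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-crio-hooks-dir                 # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - path: /etc/crio/crio.conf.d/99-hooks-dir.conf
          filesystem: root
          mode: 0644
          contents:
            # base64 of these two TOML lines:
            #   [crio.runtime]
            #   hooks_dir = ["/etc/containers/oci/hooks.d"]
            # generate with: base64 -w0 99-hooks-dir.conf
            source: data:text/plain;charset=utf-8;base64,<BASE64-ENCODED-TOML>
```

Note that the `worker` role targets all workers and triggers a rolling reboot of the pool; a custom MachineConfigPool would narrow the change to the GPU nodes only.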
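## Appendix C: Single-pod GPU smoke test

Before the multi-node MPI run, a single pod that requests one GPU and runs `nvidia-smi` verifies driver, toolkit, and device plugin end to end. The image tag here is an assumption; almost any image works, since `nvidia-smi` is injected from the host driver by the NVIDIA container toolkit.

```
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: nvidia-smi
      image: nvidia/cuda:11.0-base     # assumption: nvidia-smi itself comes from the driver, not the image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1            # request the extended resource verified above
```

`oc create -f gpu-smoke-test.yaml && oc logs -f gpu-smoke-test` should print the familiar `nvidia-smi` table showing one NVIDIA T4.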