---
title: Kubernetes installation with GPU Operator
tags: Kubernetes, GPU Operator
description: Kubernetes installation
---

# Kubernetes Installation

k8s architecture: one master node with one worker node.

GPU Operator components:

* nvidia-dcgm-exporter
* nvidia-container-toolkit
* nvidia-device-plugin
* gpu-feature-discovery

### Prerequisites

1. Kubernetes is used to manage services.
2. Services must be able to obtain GPU resources.

## Machine Detail

|hardware|nodes|gpu|
|--|--|--|
|QuantaPlex|master node|X|
|QuantaGrid|worker node|V|

### Software Versions

|software|version|
|--|--|
|containerd|v1.6.16|
|kubectl|v1.26.1|
|gpu operator|v22.9.1|

## Master Node

### Install Docker

```
$ curl https://get.docker.com | sh && sudo systemctl --now enable docker
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
      sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
    && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
      sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
```

Remove the default `config.toml` file so containerd regenerates its configuration:

```
$ sudo rm /etc/containerd/config.toml
```

Update containerd to the newest version:

```
$ sudo apt install containerd.io
```

### Install Kubernetes

```
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update && sudo apt-get install -y -q kubelet kubectl kubeadm
```

#### Initialize Kubernetes

```
$ sudo swapoff -a
$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=latest
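# Tip (not in the original guide): on success, `kubeadm init` prints a
# `kubeadm join ...` command. Save it for the worker-node "Join Master Node"
# step below; if it is lost, it can be regenerated at any time with:
$ sudo kubeadm token create --print-join-command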
$ mkdir -p $HOME/.kube \
    && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config \
    && sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```

### Install GPU Operator

```
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 \
    && chmod 700 get_helm.sh \
    && ./get_helm.sh
$ helm repo add nvidia https://helm.ngc.nvidia.com/nvidia \
    && helm repo update
$ helm install --wait --generate-name \
    -n gpu-operator --create-namespace nvidia/gpu-operator
```

## Worker Node

### Install Docker

```
$ curl https://get.docker.com | sh && sudo systemctl --now enable docker
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
      sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
    && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
      sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
```

Remove the default `config.toml` file so containerd regenerates its configuration:

```
$ sudo rm /etc/containerd/config.toml
```

#### Update containerd

```
$ sudo apt install containerd.io
```

### Install Kubernetes

```
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update && sudo apt-get install -y -q kubelet kubectl kubeadm
$ sudo swapoff -a
```

#### Join Master Node

```
$ sudo kubeadm join 192.168.1.150:6443 --token ut33w7.dksaopdkopwkp31o \
    --discovery-token-ca-cert-hash \
sha256:dsakopdsajodpsajdsoapdjsapodjsop1jo3p123u2o1podwpqud0qw9ud09wu90
```

After the worker joins the master node, the master automatically schedules the GPU Operator pods on the worker node so that its GPU resources are recognized.

---

## References

* https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian
* https://docs.nvidia.com/datacenter/cloud-native/kubernetes/install-k8s.html#install-k8s
* https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/getting-started.html#install-nvidia-gpu-operator
* https://hackmd.io/@sK-GgpcqTNWnutbxzOoG2g/ryvlVGjiv
* https://developer.aliyun.com/article/652954

## Thank you! :dash:

You can find me on

- GitHub: https://github.com/shaung08
- Email: a2369875@gmail.com
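## Appendix: Verifying the GPU Setup

A minimal smoke test for the claim above that the worker's GPUs are recognized, run from the master node. The `gpu-operator` namespace comes from the Helm install step; the pod name and sample image are illustrative assumptions, not part of the original guide:

```
# All operator pods (device plugin, toolkit, dcgm-exporter, ...) should be Running:
$ kubectl get pods -n gpu-operator

# The worker node should now advertise an allocatable nvidia.com/gpu resource:
$ kubectl describe node <worker-node-name> | grep nvidia.com/gpu

# Schedule a sample CUDA workload that requests one GPU:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"
    resources:
      limits:
        nvidia.com/gpu: 1
  EOF
$ kubectl logs pod/cuda-vectoradd
```

If the device plugin is working, the pod is scheduled onto the worker node and its logs report that the vector addition test passed.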