# 2022-01-14

## LoadBalancer

[scheduler-load.yaml](https://github.com/apache/incubator-yunikorn-k8shim/blob/master/deployments/scheduler/scheduler-load.yaml)

```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: yunikorn
  name: yunikorn-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yunikorn
  template:
    metadata:
      labels:
        app: yunikorn
        component: yunikorn-scheduler
      name: yunikorn-scheduler
    spec:
      serviceAccountName: yunikorn-admin
      containers:
        - name: yunikorn-scheduler-k8s
          image: apache/yunikorn:scheduler-latest
          resources:
            requests:
              cpu: 200m
              memory: 1Gi
            limits:
              cpu: 4
              memory: 2Gi
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: config-volume
              mountPath: /etc/yunikorn/
          ports:
            - containerPort: 9080
        - name: yunikorn-scheduler-web
          image: apache/yunikorn:web-latest
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 200m
              memory: 500Mi
          ports:
            - containerPort: 9889
      volumes:
        - name: config-volume
          configMap:
            name: yunikorn-configs
---
apiVersion: v1
kind: Service
metadata:
  name: yunikorn-service
  labels:
    app: yunikorn-service
spec:
  ports:
    - name: yunikorn-service
      port: 9889
    - name: yunikorn-core
      port: 9080
  selector:
    app: yunikorn
  type: LoadBalancer
```
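A Service of ```type: LoadBalancer``` only receives an external IP on clusters with a load-balancer provider (e.g. a cloud provider); on a bare-metal lab cluster like this one the address stays ```<pending>```. A minimal NodePort variant as a sketch for such clusters (the ```nodePort``` numbers are illustrative; any free port in the default 30000-32767 range works):

```yaml=
apiVersion: v1
kind: Service
metadata:
  name: yunikorn-service
  labels:
    app: yunikorn-service
spec:
  type: NodePort          # expose on every node's IP instead of an external LB
  selector:
    app: yunikorn
  ports:
    - name: yunikorn-service
      port: 9889
      nodePort: 30889     # illustrative choice: web UI on <node-ip>:30889
    - name: yunikorn-core
      port: 9080
      nodePort: 30080     # illustrative choice
```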
## Authorization

[yunikorn-rbac.yaml](https://github.com/apache/incubator-yunikorn-k8shim/blob/master/deployments/scheduler/yunikorn-rbac.yaml)

```yaml=
apiVersion: v1
kind: ServiceAccount
metadata:
  name: yunikorn-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: yunikorn-rbac
subjects:
  - kind: ServiceAccount
    name: yunikorn-admin
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```

## ```kubectl drain```, ```kubectl uncordon```

```kubectl get node```

```sh
NAME   STATUS   ROLES                  AGE    VERSION
lab    Ready    control-plane,master   20d    v1.23.1
lab1   Ready    <none>                 5d9h   v1.23.1
lab3   Ready    <none>                 19d    v1.23.1
lab4   Ready    <none>                 9d     v1.23.1
lab5   Ready    <none>                 8d     v1.23.1
```

### deploy yunikorn

[Deploy to Kubernetes](https://yunikorn.apache.org/docs/developer_guide/deployment)

```kubectl get deployment```

```
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
yunikorn-scheduler   1/1     1            1           5s
```

### deploy nginx

* deployment of an nginx image (an example from [incubator-yunikorn-k8shim](https://github.com/apache/incubator-yunikorn-k8shim))
* change the replica count
* drain a working node that has a pod running on it
* specify which node to run on

### deployment of an nginx image

[nginx.yaml](https://github.com/apache/incubator-yunikorn-k8shim/blob/master/deployments/examples/nginx/nginx.yaml)

```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        applicationId: "nginx_2019_01_22_00001"
        queue: root.sandbox
      name: nginx
    spec:
      schedulerName: yunikorn
      containers:
        - name: nginx
          image: "nginx:1.11.1-alpine"
          resources:
            requests:
              cpu: "500m"
              memory: "1024M"
```

```
$ ls
nginx.yaml
$ kubectl apply -f nginx.yaml
deployment.apps/nginx created
$ kubectl get deployment
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
nginx                1/1     1            1           4m8s
yunikorn-scheduler   1/1     1            1           35m
$ kubectl get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
nginx-7b4896995-f7m79                 1/1     Running   0          6m40s   172.17.0.2      lab4   <none>           <none>
yunikorn-scheduler-595f7c59b5-vqkbk   2/2     Running   0          40m     120.108.204.3   lab3   <none>           <none>
```

### change the replica count

```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4  # 1 -> 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        applicationId: "nginx_2019_01_22_00001"
        queue: root.sandbox
      name: nginx
    spec:
      schedulerName: yunikorn
      containers:
        - name: nginx
          image: "nginx:1.11.1-alpine"
          resources:
            requests:
              cpu: "500m"
              memory: "1024M"
```

```
$ kubectl apply -f nginx.yaml
deployment.apps/nginx configured
$ kubectl get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
nginx-7b4896995-bk6xg                 1/1     Running   0          15m   172.17.0.2      lab3   <none>           <none>
nginx-7b4896995-f7m79                 1/1     Running   0          27m   172.17.0.2      lab4   <none>           <none>
nginx-7b4896995-nmwld                 1/1     Running   0          18m   172.17.0.3      lab5   <none>           <none>
nginx-7b4896995-p75pz                 1/1     Running   0          18m   172.17.0.2      lab1   <none>           <none>
yunikorn-scheduler-595f7c59b5-vqkbk   2/2     Running   0          60m   120.108.204.3   lab3   <none>           <none>
```

### drain a working node that has a pod running on it

```
$ kubectl drain lab1 --ignore-daemonsets
node/lab1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-292zc, kube-system/weave-net-q2f7r
evicting pod default/nginx-7b4896995-p75pz
pod/nginx-7b4896995-p75pz evicted
node/lab1 drained
$ kubectl get node
NAME   STATUS                     ROLES                  AGE     VERSION
lab    Ready                      control-plane,master   20d     v1.23.1
lab1   Ready,SchedulingDisabled   <none>                 5d10h   v1.23.1
lab3   Ready                      <none>                 19d     v1.23.1
lab4   Ready                      <none>                 9d      v1.23.1
lab5   Ready                      <none>                 8d      v1.23.1
$ kubectl get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
nginx-7b4896995-bk6xg                 1/1     Running   0          18m   172.17.0.2      lab3   <none>           <none>
nginx-7b4896995-f7m79                 1/1     Running   0          29m   172.17.0.2      lab4   <none>           <none>
nginx-7b4896995-nmwld                 1/1     Running   0          21m   172.17.0.3      lab5   <none>           <none>
nginx-7b4896995-v4499                 1/1     Running   0          42s   172.17.0.3      lab3   <none>           <none>   # moved from lab1 to lab3
yunikorn-scheduler-595f7c59b5-vqkbk   2/2     Running   0          63m   120.108.204.3   lab3   <none>           <none>
```

* uncordon ```lab1```

```
$ kubectl uncordon lab1
node/lab1 uncordoned
$ kubectl get node
NAME   STATUS   ROLES                  AGE     VERSION
lab    Ready    control-plane,master   20d     v1.23.1
lab1   Ready    <none>                 5d10h   v1.23.1
lab3   Ready    <none>                 19d     v1.23.1
lab4   Ready    <none>                 9d      v1.23.1
lab5   Ready    <none>                 8d      v1.23.1
```

### specify which node to run on

move all the nginx pods to node ```lab1```

```
$ kubectl describe node lab1
Name:               lab1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=lab1   # use this label to pin pods to the node
                    kubernetes.io/os=linux
...
```

modify nginx.yaml:

* [Deployment v1 apps](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#deployment-v1-apps)
* [DeploymentSpec v1 apps](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#deploymentspec-v1-apps)
* [PodTemplateSpec v1 core](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podtemplatespec-v1-core)
* [PodSpec v1 core](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podspec-v1-core)
* nodeSelector (an affinity-based alternative is sketched after the output below)

```yaml=
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        applicationId: "nginx_2019_01_22_00001"
        queue: root.sandbox
      name: nginx
    spec:
      nodeSelector:                    # pin all replicas to lab1
        kubernetes.io/hostname: lab1
      schedulerName: yunikorn
      containers:
        - name: nginx
          image: "nginx:1.11.1-alpine"
          resources:
            requests:
              cpu: "500m"
              memory: "1024M"
```

```
$ kubectl apply -f nginx.yaml
deployment.apps/nginx configured
$ kubectl get pod -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
nginx-6c4bf5b456-8zpdg                1/1     Running   0          4s    172.17.0.5      lab1   <none>           <none>
nginx-6c4bf5b456-f6vpf                1/1     Running   0          4s    172.17.0.4      lab1   <none>           <none>
nginx-6c4bf5b456-jdmch                1/1     Running   0          6s    172.17.0.2      lab1   <none>           <none>
nginx-6c4bf5b456-l84xw                1/1     Running   0          6s    172.17.0.3      lab1   <none>           <none>
yunikorn-scheduler-595f7c59b5-vqkbk   2/2     Running   0          77m   120.108.204.3   lab3   <none>           <none>
```
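```nodeSelector``` is the simplest node-pinning mechanism; the same placement can also be expressed with node affinity, which supports richer operators (```In```, ```NotIn```, ```Exists```, ...). A minimal sketch of the equivalent pod template fragment, assuming the same ```kubernetes.io/hostname=lab1``` label:

```yaml=
# fragment of the Deployment above: drop nodeSelector, use affinity instead
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In    # could list several hostnames here
                    values:
                      - lab1
      schedulerName: yunikorn
```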
## yunikorn core code

[resource quota management](https://yunikorn.apache.org/docs/user_guide/resource_quota_management/)

![queue resource quotas](https://yunikorn.apache.org/assets/images/queue-resource-quotas-02ec11ddedad1f2057bbc4d3ef1c900a.png)
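The nginx pods are tagged ```queue: root.sandbox```, so a quota on that queue caps what the whole deployment can consume. A sketch of a matching ```queues.yaml``` (delivered through the ```yunikorn-configs``` ConfigMap the scheduler Deployment mounts); the queue names follow the examples above, the quota numbers are made up, and exact keys/units should be checked against the linked docs:

```yaml=
partitions:
  - name: default
    queues:
      - name: root
        submitacl: '*'
        queues:
          - name: sandbox
            resources:
              guaranteed:        # made-up quota numbers
                memory: 2000
                vcore: 2000
              max:
                memory: 8000
                vcore: 8000
```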