---
tags: technical blogs
---

# Experiment with VMs in a Kubernetes world

"Gartner predicts that 70% of organizations will run two or more containerized applications by 2023."

To stay ahead of the competition, many organizations have started to modernize their applications and migrate workloads to containers, which are much more agile, lightweight, and portable. In practice, however, adapting to a new architecture is not that easy. Some of your applications may depend on legacy infrastructure, or modernization may take years to complete. To keep existing workloads running smoothly, you need to maintain the VMs that host your legacy applications while putting effort into refactoring them. All of this increases the burden on developers and slows down the transformation.

Is there a platform that can handle both container and VM workloads without the need to rip up everything already in place? The obvious answer is [KubeSphere Virtualization (KSV)](https://kubesphere.cloud/en/docs/ksv/01-introduction/01-what-is-ksv/). By deploying KSV on [KubeSphere](https://github.com/kubesphere/kubesphere#what-is-kubesphere), you can run and manage VMs right inside Kubernetes.

## About KSV

KSV is a lightweight VM management platform built on top of KubeSphere. KSV supports deployment in single-node and multi-node modes and provides enterprise-grade virtualization to meet the business needs of individual and enterprise users in diverse scenarios.

KSV helps you reduce hardware costs in the early stages and lets you create VMs based on business requirements, so you can develop and deploy applications efficiently.

## Architecture

On top of the cloud native architecture of Kubernetes, KSV abstracts away the complexities of the infrastructure and basic configuration. You can run KSV in development or testing environments of various types, from private clouds, public clouds, VMs, and physical servers to edge nodes. KSV provides a range of functional modules, such as physical nodes, networks, storage, VMs, images, disks, and security groups. The following figure shows the architecture.

![architecture](https://i.imgur.com/c8J2UcU.png)
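Because KSV builds on the Kubernetes API, the VMs, disks, and images it manages ultimately surface as custom resources in the cluster. As a rough sketch (assuming KSV is already installed, as described in the walkthrough below, and that the exact resource names match your KSV/KubeVirt version), you can inspect them with ordinary `kubectl` commands:

```bash
# List the virtualization-related APIs registered in the cluster
kubectl api-resources | grep -Ei 'kubevirt|virtualization'

# VMs are ordinary custom resources, so the standard kubectl verbs apply
kubectl get virtualmachines --all-namespaces
```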
## Experiment with VMs in Kubernetes

Now that you have a basic picture of KSV, I'll walk you through how to install it on KubeSphere and experiment with VMs in Kubernetes.

### Before you begin

Before you install KSV, check whether your server node meets the following prerequisites:

* The server node on which KSV is installed must run Linux with kernel version 4.0 or later. We recommend one of the following OSs: Ubuntu 18.04, Ubuntu 20.04, CentOS 7.9, Unity Operating System (UOS), Kylin V10, or EulerOS.

  Query the OS version: `cat /etc/issue`

* Make sure your server node meets the following hardware requirements:

  | Hardware | Minimum | Recommended |
  | --- | --- | --- |
  | CPU cores | 4 cores | 8 cores |
  | Memory | 8 GB | 16 GB |
  | System disk | 100 GB | 200 GB |

* Query the number of CPU cores of your server: `cat /proc/cpuinfo | grep "processor" | sort | uniq | wc -l`

* Query the memory size: `cat /proc/meminfo | grep MemTotal`

* Query the available disk size: `df -hl`

* The server node must have at least one disk that is unformatted and unpartitioned, or one unformatted partition. The minimum size of the disk or partition is 100 GB, and the recommended size is 200 GB.

  Query the disk partitions of the server node: `lsblk -f`

* The server node must support virtualization. If a server node does not support virtualization, KSV runs in emulation mode, which consumes more resources and makes all VM-based modules unavailable.

  Query whether the server node supports virtualization. If the command output is empty, the server node does not support virtualization.

  * x86: `grep -E '(svm|vmx)' /proc/cpuinfo`
  * ARM64: `ls /dev/kvm`

For more information about the prerequisites, see the [official documentation](https://kubesphere.cloud/en/docs/ksv/02-quick-start/01-install-ksv-in-single-node-mode/).

### Step 1 Install dependencies

The **socat**, **conntrack**, **ebtables**, and **ipset** dependencies are required on your server node. If they are already installed, skip this step.

If your server node runs Ubuntu, run the following command to install the dependencies:

```bash
sudo apt install socat conntrack ebtables ipset -y
```

If your server node runs another OS, replace **apt** with that OS's package manager.
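For example, on CentOS the equivalent **yum** command would look roughly like this (a sketch; package names can differ between distributions, e.g. the conntrack CLI is packaged as **conntrack-tools** on CentOS):

```bash
# CentOS / RHEL-style equivalent of the apt command above
sudo yum install -y socat conntrack-tools ebtables ipset
```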
### Step 2 Deploy a Kubernetes cluster

1. Download KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.5 sh -
```

2. Make the **kk** binary executable:

```bash
sudo chmod +x kk
```

3. Create a cluster configuration file:

```bash
./kk create config
```

4. Edit the file **config-sample.yaml**:

```bash
vi config-sample.yaml
```

Sample code:

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: "Qcloud@123"}
  - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, user: ubuntu, password: "Qcloud@123"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
```

5. In **spec:hosts** of the file **config-sample.yaml**, configure your server node.

| Parameter | Description |
| --- | --- |
| name | The custom name of the node. |
| address | The node IP address for SSH login. |
| internalAddress | The internal IP address of the node. |
| user | The username for SSH login. Specify user **root** or a user with permission to run **sudo** commands. If you leave this parameter empty, user **root** is used by default. |
| password | The password for SSH login. |

6. In **spec:network:plugin** of the file **config-sample.yaml**, set the value to **kubeovn**.

Sample code:

```yaml
network:
  plugin: kubeovn
```

7. In **spec:network:multusCNI:enabled** of the file **config-sample.yaml**, set the value to **true**.

Sample code:

```yaml
multusCNI:
  enabled: true
```

8. Create the Kubernetes cluster:

```bash
./kk create cluster -f config-sample.yaml
```
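Once KubeKey finishes, it is worth confirming that the cluster is healthy before installing KSV. A minimal check (assuming `kubectl` is available on the node; KubeKey configures it on the control plane by default):

```bash
# All nodes should report a Ready status
kubectl get nodes -o wide

# Core components (kube-ovn, multus, coredns, and so on) should be Running
kubectl get pods --all-namespaces
```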
### Step 3 Deploy KubeSphere and KSV in hybrid mode

1. Run the following command to create an installer:

```bash
cat <<EOF | kubectl apply -f -
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
    - cc
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - tenant.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - certificates.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - devops.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.coreos.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - logging.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - jaegertracing.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - autoscaling
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - config.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - iam.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - notification.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - auditing.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - events.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - core.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - installer.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - storage.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - security.istio.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - monitoring.kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kiali.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kubeedge.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - types.kubefed.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - scheduling.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - kubevirt.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - cdi.kubevirt.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - network.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - virtualization.kubesphere.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - snapshot.storage.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - ceph.rook.io
  resources:
  - '*'
  verbs:
  - '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
- kind: ServiceAccount
  name: ks-installer
  namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-install
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-install
  template:
    metadata:
      labels:
        app: ks-install
    spec:
      serviceAccountName: ks-installer
      containers:
      - name: installer
        image: kubespheredev/ksv-installer:v1.6.0
        imagePullPolicy: "Always"
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 20m
            memory: 100Mi
        volumeMounts:
        - mountPath: /etc/localtime
          name: host-time
      volumes:
      - hostPath:
          path: /etc/localtime
          type: ""
        name: host-time
EOF
```

2. Run the following command to create a cluster configuration file:

```bash
cat <<EOF | kubectl apply -f -
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enabled: true
        enableMultiLogin: true
        port: 30880
        type: NodePort
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
      iptables-manager:
        enabled: true
        mode: "external"
  gatekeeper:
    enabled: false
  virtualization:
    enabled: true
    expressNetworkMTU: 1300
    useEmulation: false
    cpuAllocationRatio: 1
    console:
      port: 30890
      type: NodePort
  terminal:
    timeout: 600
EOF
```

3. Run the following command to query the installation logs:

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
```

The deployment may take 10 to 30 minutes, depending on your hardware and network environment. If the following output appears, the deployment is successful:

```bash
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
```
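Before logging in, you can optionally confirm that the KubeSphere and KSV components came up as expected. A minimal sketch (namespace and CRD names may vary slightly between versions):

```bash
# The installer and core KubeSphere pods should be Running
kubectl get pods -n kubesphere-system

# The virtualization-related CRDs registered by KSV should be present
kubectl get crd | grep -E 'kubevirt|virtualization'
```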
Obtain the IP address, admin account, and password from the **Console**, **Account**, and **Password** fields in the output, and use a browser to log in to the KubeSphere web console. Depending on your network environment, you may need to configure traffic forwarding rules and open port `30880` in the firewall.

## About KubeSphere

KubeSphere is an open source container platform built on top of Kubernetes with applications at its core. It provides full-stack automated IT operations and streamlined DevOps workflows. KubeSphere has been adopted by thousands of enterprises across the globe, such as Aqara, Sina, Benlai, China Taiping, Huaxia Bank, Sinopharm, WeBank, Geko Cloud, VNG Corporation, and Radore.

KubeSphere offers wizard-style interfaces and various enterprise-grade features for operation and maintenance, including Kubernetes resource management, [DevOps (CI/CD)](https://kubesphere.io/devops/), application lifecycle management, service mesh, multi-tenant management, [monitoring](https://kubesphere.io/observability/), logging, alerting, notification, storage and network management, and GPU support. With KubeSphere, enterprises can quickly establish a strong and feature-rich container platform.

For more information, visit https://kubesphere.io/.