A First Look at vSphere VM Service

# Preface

With the latest VMware vSphere with Tanzu 7.0 U2a, VMware finally delivers the ability to create VMs declaratively, rounding out the vSphere with Tanzu story. For composite applications that mix VMs and containers, developers can now publish both virtual machines and containerized workloads directly from YAML files, truly achieving IaC (Infrastructure as Code).

![](https://i.imgur.com/7eQZHBz.png)

Strictly speaking, there was already a way to create VMs in a namespace before 7.0 U2a, but it was something of a hack and is not officially supported; see [https://www.virten.net/2021/04/create-virtual-machines-in-vsphere-with-tanzu-using-kubectl/](https://www.virten.net/2021/04/create-virtual-machines-in-vsphere-with-tanzu-using-kubectl/). Roughly, the approach is:

1. Create a content library and import a VM image as an OVF; there are no requirements on the image (it is not officially supported anyway);
2. SSH into a Control VM (master node) of the Supervisor Cluster and use kubectl to modify vmoperator so that the content library from the previous step is added to the Supervisor Cluster (in a normal configuration a Supervisor Cluster can only be attached to one content library);
3. Add a ClusterRole with create/update/delete and similar verbs on VM/network resources and bind it to a user;
4. Log in to the Supervisor Cluster as that user; VMs can then be created in a namespace using the YAML parameters exposed by vmoperator.

This method is for showing off only, not for production use. The rest of this post follows the official documentation.

# Using VM Service per the Official Documentation

Official documentation: https://docs.vmware.com/cn/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-F81E3535-C275-4DDE-B35F-CE759EA3B4A0.html

![](https://i.imgur.com/yszzXxJ.png)

Using VM Service essentially follows the figure above, configuring step by step from right to left.

## Environment

![](https://i.imgur.com/jZSlfED.png)
![](https://i.imgur.com/OQZUaJ8.png)

vCenter/ESXi are both 7.0 U2a. Networking uses NSX-T, version 3.1.2.

## Content Library and Images

![](https://i.imgur.com/NsBcOju.png)

Create a local content library; currently only OVF images imported into a content library are supported, and they cannot be synchronized in from a subscription URL.

![](https://i.imgur.com/PGMkQe7.png)

Search for "vm service" on the VMware Marketplace at [https://marketplace.cloud.vmware.com/](https://marketplace.cloud.vmware.com/). At the moment only Ubuntu and CentOS images are offered. The CentOS image is version 8; I did not try the Ubuntu one, presumably 18 or 20. More images should keep being added. Download the image locally and import it into the content library created above.

![](https://i.imgur.com/7yiQlRJ.png)

The CentOS 8 image is roughly 1.65 GB and is packaged by VMware; bring-your-own (BYO) images are not supported yet.

## Viewing VM Classes and Related Information

- VM Class = VM sizing (small/medium/large and so on)
- VM Image = image resources in the content library
- Storage Class = a storage policy assigned to the Supervisor Cluster

![](https://i.imgur.com/TZyRS1S.png)

Workload Management now has a Services tab where VM Classes can be viewed. Sixteen default classes are provided, and custom classes can also be created (a rough sketch of the underlying object follows the listings below).

![](https://i.imgur.com/iMOLdZl.png)

In each namespace that needs VM Service, select the required VM Classes. You could select all of them; here only two are selected.

![](https://i.imgur.com/6dIgRKH.png)

The namespace also needs the content library created earlier so that it can access the VM images.

Once configured, log in to the Supervisor Cluster to verify.

```
[root@cli-vm ~]# kubectl get vmclass
NAME                  CPU   MEMORY   AGE
best-effort-2xlarge   8     64Gi     2d
best-effort-4xlarge   16    128Gi    2d
best-effort-8xlarge   32    128Gi    2d
best-effort-large     4     16Gi     2d
best-effort-medium    2     8Gi      2d
best-effort-small     2     4Gi      2d
best-effort-xlarge    4     32Gi     2d
best-effort-xsmall    2     2Gi      2d
guaranteed-2xlarge    8     64Gi     2d
guaranteed-4xlarge    16    128Gi    2d
guaranteed-8xlarge    32    128Gi    2d
guaranteed-large      4     16Gi     2d
guaranteed-medium     2     8Gi      2d
guaranteed-small      2     4Gi      2d
guaranteed-xlarge     4     32Gi     2d
guaranteed-xsmall     2     2Gi      2d

[root@cli-vm ~]# kubectl -n ns01 get vmclassbinding
NAME                 VIRTUALMACHINECLASS   AGE
best-effort-small    best-effort-small     47h
best-effort-xsmall   best-effort-xsmall    47h
```
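For reference, each entry listed above is backed by a VirtualMachineClass object from the vm-operator v1alpha1 API. The sketch below shows roughly what such an object looks like; the name `custom-xsmall` and the sizing are made up for illustration, and in 7.0 U2a custom classes are normally created from the vSphere Client UI rather than applied with kubectl.

```
# Hypothetical custom VM Class (a sketch based on the vm-operator v1alpha1
# VirtualMachineClass API; name and sizing below are illustrative only)
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: custom-xsmall
spec:
  hardware:
    cpus: 2        # vCPU count, the CPU column above
    memory: 4Gi    # memory size, the MEMORY column above
```

In practice the defaults cover most sizing needs; the difference is that `guaranteed-*` classes fully reserve CPU and memory for the VM, while `best-effort-*` classes do not.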
Query the image uploaded to the content library earlier:

```
[root@cli-vm ~]# kubectl get vmimage
NAME                                                         VERSION                           OSTYPE                FORMAT   AGE
centos-stream-8-vmservice-v1alpha1-1619529007339                                               centos8_64Guest       ovf      47h
ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd    v1.16.8+vmware.1-tkg.3.60d2ffd    vmwarePhoton64Guest   ovf      47h
ob-16466772-photon-3-k8s-v1.17.7---vmware.1-tkg.1.154236c    v1.17.7+vmware.1-tkg.1.154236c    vmwarePhoton64Guest   ovf      47h
ob-16545581-photon-3-k8s-v1.16.12---vmware.1-tkg.1.da7afe7   v1.16.12+vmware.1-tkg.1.da7afe7   vmwarePhoton64Guest   ovf      47h
ob-16551547-photon-3-k8s-v1.17.8---vmware.1-tkg.1.5417466    v1.17.8+vmware.1-tkg.1.5417466    vmwarePhoton64Guest   ovf      2d
ob-16897056-photon-3-k8s-v1.16.14---vmware.1-tkg.1.ada4837   v1.16.14+vmware.1-tkg.1.ada4837   vmwarePhoton64Guest   ovf      47h
ob-16924026-photon-3-k8s-v1.18.5---vmware.1-tkg.1.c40d30d    v1.18.5+vmware.1-tkg.1.c40d30d    vmwarePhoton64Guest   ovf      47h
ob-16924027-photon-3-k8s-v1.17.11---vmware.1-tkg.1.15f1e18   v1.17.11+vmware.1-tkg.1.15f1e18   vmwarePhoton64Guest   ovf      47h
ob-17010758-photon-3-k8s-v1.17.11---vmware.1-tkg.2.ad3d374   v1.17.11+vmware.1-tkg.2.ad3d374   vmwarePhoton64Guest   ovf      2d
ob-17332787-photon-3-k8s-v1.17.13---vmware.1-tkg.2.2c133ed   v1.17.13+vmware.1-tkg.2.2c133ed   vmwarePhoton64Guest   ovf      2d
ob-17419070-photon-3-k8s-v1.18.10---vmware.1-tkg.1.3a6cd48   v1.18.10+vmware.1-tkg.1.3a6cd48   vmwarePhoton64Guest   ovf      47h
ob-17654937-photon-3-k8s-v1.18.15---vmware.1-tkg.1.600e412   v1.18.15+vmware.1-tkg.1.600e412   vmwarePhoton64Guest   ovf      47h
ob-17658793-photon-3-k8s-v1.17.17---vmware.1-tkg.1.d44d45a   v1.17.17+vmware.1-tkg.1.d44d45a   vmwarePhoton64Guest   ovf      47h
ob-17660956-photon-3-k8s-v1.19.7---vmware.1-tkg.1.fc82c41    v1.19.7+vmware.1-tkg.1.fc82c41    vmwarePhoton64Guest   ovf      47h
ob-17861429-photon-3-k8s-v1.20.2---vmware.1-tkg.1.1d4f79a    v1.20.2+vmware.1-tkg.1.1d4f79a    vmwarePhoton64Guest   ovf      47h
ob-18035533-photon-3-k8s-v1.18.15---vmware.1-tkg.2.ebf6117   v1.18.15+vmware.1-tkg.2.ebf6117   vmwarePhoton64Guest   ovf      2d
ob-18035534-photon-3-k8s-v1.19.7---vmware.1-tkg.2.f52f85a    v1.19.7+vmware.1-tkg.2.f52f85a    vmwarePhoton64Guest   ovf      47h
ob-18037317-photon-3-k8s-v1.20.2---vmware.1-tkg.2.3e10706    v1.20.2+vmware.1-tkg.2.3e10706    vmwarePhoton64Guest   ovf      47h
```

The vmimage list naturally also includes the VMware Kubernetes release images synchronized when vSphere with Tanzu was initially deployed; the first row is the CentOS image imported into the content library earlier.

```
[root@cli-vm ~]# kubectl get storageclass
NAME                 PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
vwk-storage-policy   csi.vsphere.vmware.com   Delete          Immediate           true                   47h
```

The Storage Class is already needed when vSphere with Tanzu is initially deployed; only a single storage policy is used here.

## Deploying a VM with VM Service

The official documentation provides a simple example YAML for deploying a VM (based on a vSphere VDS network, not NSX-T); see https://docs.vmware.com/cn/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-5D254A77-AB6B-40AB-AB27-1AE6A917DC52.html . Since my environment uses NSX-T, it is modified as follows:

```
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: svc-just-vm
  namespace: ns01
spec:
  selector:
    app: just-vm
  type: LoadBalancer
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
---
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: just-vm
  namespace: ns01
  labels:
    app: just-vm
spec:
  imageName: centos-stream-8-vmservice-v1alpha1-1619529007339
  className: best-effort-xsmall
  powerState: poweredOn
  storageClass: vwk-storage-policy
  networkInterfaces:
  - networkType: nsx-t
```

In an NSX-T environment, a VM deployed through VM Service gets its address from the Pod CIDR (SNATed), just like a Kubernetes Pod, so a LoadBalancer Service is required for access from outside. In this example the LB Service publishes port 22 for SSH testing.

```
[root@cli-vm tanzu]# kubectl apply -f just-vm.yaml
virtualmachineservice.vmoperator.vmware.com/svc-just-vm created
virtualmachine.vmoperator.vmware.com/just-vm created
```

![](https://i.imgur.com/TSAeoO2.png)

After kubectl apply creates the objects, vCenter starts deploying the VM from the OVF template. A short while later, both the VM and the Service are up, and the LB is assigned an external IP from the Ingress CIDR configured for the Supervisor Cluster.

```
[root@cli-vm tanzu]# kubectl get vm,svc
NAME                                           POWERSTATE   AGE
virtualmachine.vmoperator.vmware.com/just-vm   poweredOn    9m1s

NAME                  TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
service/svc-just-vm   LoadBalancer   10.96.2.154   192.168.130.34   22:32620/TCP   12m
```
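Beyond `kubectl get`, the usual inspection verbs work on these objects as well. The commands below are an assumed follow-up (not part of the original run) that can help when a VM does not come up as expected:

```
# Assumed follow-up checks, not from the original walkthrough:
kubectl -n ns01 describe vm just-vm                     # VM conditions, events, assigned address
kubectl -n ns01 get virtualmachineservice svc-just-vm   # the VM Service backing the LB
kubectl -n ns01 get events --sort-by=.lastTimestamp     # OVF deployment / reconfigure errors
```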
![](https://i.imgur.com/IvTJLBc.png)

From the vSphere side, the expected VM appears in the vSphere with Tanzu namespace. Look at this VM more closely, though, and you will find that in the vSphere Client UI you cannot open its console, edit its settings, add devices, change its power state, take snapshots, or vMotion it...

These changes may leave traditional VI Admins a little annoyed, which is understandable: something shows up in their own console that they cannot touch at all. I think the right way to look at it is this: vSphere with Tanzu is meant to give customers a unified application platform as they move from traditional VM-based deployments toward cloud-native applications, and the namespace is the playground handed to developers. Immutable Infrastructure is a core property of the cloud-native world, so a VM created through VM Service inside a namespace should also behave as immutable infrastructure and should not be modified directly from the vSphere Client. In times of sweeping change, we all need a while to adapt...

![](https://i.imgur.com/OAQ3oKw.png)

Although the VM cannot be modified, it still falls within the operations team's remit and none of the management visibility is lost. As shown above, the VI Admin can fully see the VM's running state and resource utilization and work with the developers at any time; after all, nobody knows the infrastructure better than the Infra/Ops team.

The official documentation essentially stops here: we can indeed create a VM from a YAML file. As a final check, let's SSH into this VM, since we need to get in before we can install software or write code. Open PuTTY and give it a try.

![](https://i.imgur.com/Du4gayK.png)

A few observations:

1. The VM is already listening on port 22;
2. The template ships with a default user, "cloud-user", but its password is unknown;
3. Password authentication is disabled in the template, so at minimum key-based authentication is required.

So all we have done so far is create, the Kubernetes way, a VM object that we cannot actually log in to; to really use it, more configuration is needed.

# Using VM Service for Real

Looking at the structure of the VM template, there are several predefined fields:

```
[root@cli-vm tanzu]# kubectl get vmimage centos-stream-8-vmservice-v1alpha1-1619529007339 -o jsonpath='{.spec}' | jq
{
  "imageSourceType": "Content Library",
  "osInfo": {
    "type": "centos8_64Guest",
    "version": "8"
  },
  "ovfEnv": {
    "hostname": {
      "default": "centosguest",
      "key": "hostname",
      "type": "string"
    },
    "instance-id": {
      "default": "id-ovf",
      "key": "instance-id",
      "type": "string"
    },
    "password": {
      "key": "password",
      "type": "string"
    },
    "public-keys": {
      "key": "public-keys",
      "type": "string"
    },
    "seedfrom": {
      "key": "seedfrom",
      "type": "string"
    },
    "user-data": {
      "key": "user-data",
      "type": "string"
    }
  },
  "productInfo": {
    "product": "Centos Stream 8 (64-bit) For VMware VM Service"
  },
  "type": "ovf"
}
```

For example, under ovfEnv there is "public-keys", which clearly lets us inject a public key, while the "user-data" field feeds cloud-init for guest OS customization and application deployment automation; more on that later.

## Creating a VM You Can at Least SSH Into

Let's start with a simple demonstration using "public-keys" so the VM becomes genuinely usable. A ConfigMap is used to pass the configuration items in.

```
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: svc-vm-with-pkey
  namespace: ns01
spec:
  selector:
    app: vm-with-pkey
  type: LoadBalancer
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
---
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: vm-with-pkey
  namespace: ns01
  labels:
    app: vm-with-pkey
spec:
  imageName: centos-stream-8-vmservice-v1alpha1-1619529007339
  className: best-effort-xsmall
  powerState: poweredOn
  storageClass: vwk-storage-policy
  networkInterfaces:
  - networkType: nsx-t
  vmMetadata:
    configMapName: vm-with-pkey
    transport: OvfEnv
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vm-with-pkey
  namespace: ns01
data:
  hostname: centos-ssh
  password: VMware1!
  public-keys: ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAy6mdraLUQZqtO/jL7fK7/fPPO3xhwRooDgLy3UWCp99+nUVenbEh9EiJQ83IFBbfkOS5bvH34DR7g1hxwP/qm741YmeatdZQ76fdZzFUOozhhpBgXNX65oNMYpQyTqy3ix5t2A7T8Ilp4VKMXVBVg9V0dNO8S7dy7zRM2ic4WyvlQaovBPnTGkgcoJE7AM/4psMxvAfP6lnnxWP9USlBPh8QsPn5Kp7UQZa7/2jby/E/SP5ckVcpXdLVHyio643hXvAeyUqBxyPUb/XAXOMK9M+XMUdK4slKNw0RNmeUORRNCcyj2Mhrvm228ZVhbpawUUOnFrP0ggu6WqufePesTw== rsa-key-20210622
```

In this example, following the structure of the OVF template, the ConfigMap writes hostname, password (for cloud-user), and public-keys into the ovfEnv, setting the hostname while we are at it.
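If you do not already have a key pair, one can be generated with standard tooling. The snippet below is an assumed aside (the original used a PuTTY-generated key with the comment rsa-key-20210622; the file name `vmservice_key` here is made up); the single-line OpenSSH public key is what goes into the `public-keys` value above.

```
# Assumed example: create an RSA key pair and print the one-line public key
# expected by the "public-keys" OVF property (file name is illustrative).
ssh-keygen -t rsa -b 2048 -N "" -C "rsa-key-20210622" -f ./vmservice_key
cat ./vmservice_key.pub
```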
Let's deploy it again and see.

```
[root@cli-vm tanzu]# kubectl apply -f vm-with-pkey.yaml
virtualmachineservice.vmoperator.vmware.com/svc-vm-with-pkey created
virtualmachine.vmoperator.vmware.com/vm-with-pkey created
configmap/vm-with-pkey created

[root@cli-vm tanzu]# kubectl get vm,svc
NAME                                                POWERSTATE   AGE
virtualmachine.vmoperator.vmware.com/just-vm        poweredOn    79m
virtualmachine.vmoperator.vmware.com/vm-with-pkey   poweredOn    6m51s

NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
service/svc-just-vm        LoadBalancer   10.96.2.154   192.168.130.34   22:32620/TCP   82m
service/svc-vm-with-pkey   LoadBalancer   10.96.1.233   192.168.130.35   22:32172/TCP   6m49s
```

Open PuTTY, configure the username and private key, and start an SSH session again:

![](https://i.imgur.com/njhioLQ.png)

The SSH connection succeeds, and the guest even thoughtfully asks the user to change the password. After changing it, subsequent SSH logins go straight through.

![](https://i.imgur.com/zxXSgpQ.png)
![](https://i.imgur.com/ut1VO4l.png)

Compared with the earlier VM, the virtual machine name has also been updated correctly; without setting hostname, the default name "centosguest" would be used.

## Using Cloud-init for Richer Configuration

The VM above is now reachable, but it is still essentially a blank VM: users would still have to configure the OS by hand (add users and groups, and so on) and install software. That is repetitive work that lends itself to automation, so cloud-init is used here for automatic configuration and application deployment.

VM Service likewise passes the cloud-init configuration into the VM through a ConfigMap. Note that the cloud-init configuration must be converted to base64 before it can be placed in the ConfigMap.

```
[root@cli-vm tanzu]# cat centos-user-data
#cloud-config
chpasswd:
    list: |
      centos:VMware1!
    expire: false
groups:
  - docker
users:
  - default
  - name: centos
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAy6mdraLUQZqtO/jL7fK7/fPPO3xhwRooDgLy3UWCp99+nUVenbEh9EiJQ83IFBbfkOS5bvH34DR7g1hxwP/qm741YmeatdZQ76fdZzFUOozhhpBgXNX65oNMYpQyTqy3ix5t2A7T8Ilp4VKMXVBVg9V0dNO8S7dy7zRM2ic4WyvlQaovBPnTGkgcoJE7AM/4psMxvAfP6lnnxWP9USlBPh8QsPn5Kp7UQZa7/2jby/E/SP5ckVcpXdLVHyio643hXvAeyUqBxyPUb/XAXOMK9M+XMUdK4slKNw0RNmeUORRNCcyj2Mhrvm228ZVhbpawUUOnFrP0ggu6WqufePesTw== rsa-key-20210622
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, docker
    shell: /bin/bash
network:
  version: 2
  ethernets:
      ens192:
          dhcp4: true
```

A cloud-init configuration file starts with #cloud-config. This one adds a user, creates a group and puts the user in it, supplies a public key for the new user, sets up passwordless sudo, and so on.

The cloud-init configuration has to be encoded before it can be used:

```
[root@cli-vm tanzu]# cat centos-user-data |base64 -w0
I2Nsb3VkLWNvbmZpZwpjaHBhc3N3ZDoKICAgIGxpc3Q6IHwKICAgICAgY2VudG9zOlZNd2FyZTEhCiAgICBleHBpcmU6IGZhbHNlCmdyb3VwczoKICAtIGRvY2tlcgp1c2VyczoKICAtIGRlZmF1bHQKICAtIG5hbWU6IGNlbnRvcwogICAgc3NoLWF1dGhvcml6ZWQta2V5czoKICAgICAgLSBzc2gtcnNhIEFBQUFCM056YUMxeWMyRUFBQUFCSlFBQUFRRUF5Nm1kcmFMVVFacXRPL2pMN2ZLNy9mUFBPM3hod1Jvb0RnTHkzVVdDcDk5K25VVmVuYkVoOUVpSlE4M0lGQmJma09TNWJ2SDM0RFI3ZzFoeHdQL3FtNzQxWW1lYXRkWlE3NmZkWnpGVU9vemhocEJnWE5YNjVvTk1ZcFF5VHF5M2l4NXQyQTdUOElscDRWS01YVkJWZzlWMGROTzhTN2R5N3pSTTJpYzRXeXZsUWFvdkJQblRHa2djb0pFN0FNLzRwc014dkFmUDZsbm54V1A5VVNsQlBoOFFzUG41S3A3VVFaYTcvMmpieS9FL1NQNWNrVmNwWGRMVkh5aW82NDNoWHZBZXlVcUJ4eVBVYi9YQVhPTUs5TStYTVVkSzRzbEtOdzBSTm1lVU9SUk5DY3lqMk1ocnZtMjI4WlZoYnBhd1VVT25GclAwZ2d1NldxdWZlUGVzVHc9PSByc2Eta2V5LTIwMjEwNjIyCiAgICBzdWRvOiBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBncm91cHM6IHN1ZG8sIGRvY2tlcgogICAgc2hlbGw6IC9iaW4vYmFzaApuZXR3b3JrOgogIHZlcnNpb246IDIKICBldGhlcm5ldHM6CiAgICAgIGVuczE5MjoKICAgICAgICAgIGRoY3A0OiB0cnVlCg==[root@cli-vm tanzu]#
```

The entire output can then be copied and pasted into the ConfigMap.
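A quick sanity check before pasting is to decode the string back and confirm it still starts with #cloud-config. The commands below are an assumed aside, reusing the file name from the example above:

```
# Assumed sanity check: round-trip the encoding and confirm the header survives.
base64 -w0 centos-user-data > user-data.b64
base64 -d user-data.b64 | head -n 1   # should print: #cloud-config
```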
```
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: svc-vm-cloudinit-conf
  namespace: ns01
spec:
  selector:
    app: vm-cloudinit-conf
  type: LoadBalancer
  ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
---
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: vm-cloudinit-conf
  namespace: ns01
  labels:
    app: vm-cloudinit-conf
spec:
  imageName: centos-stream-8-vmservice-v1alpha1-1619529007339
  className: best-effort-xsmall
  powerState: poweredOn
  storageClass: vwk-storage-policy
  networkInterfaces:
  - networkType: nsx-t
  vmMetadata:
    configMapName: vm-with-pkey
    transport: OvfEnv
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vm-with-pkey
  namespace: ns01
data:
  hostname: centos-cloudinit
  user-data: |
    I2Nsb3VkLWNvbmZpZwpjaHBhc3N3ZDoKICAgIGxpc3Q6IHwKICAgICAgY2VudG9zOlZNd2FyZTEhCiAgICBleHBpcmU6IGZhbHNlCmdyb3VwczoKICAtIGRvY2tlcgp1c2VyczoKICAtIGRlZmF1bHQKICAtIG5hbWU6IGNlbnRvcwogICAgc3NoLWF1dGhvcml6ZWQta2V5czoKICAgICAgLSBzc2gtcnNhIEFBQUFCM056YUMxeWMyRUFBQUFCSlFBQUFRRUF5Nm1kcmFMVVFacXRPL2pMN2ZLNy9mUFBPM3hod1Jvb0RnTHkzVVdDcDk5K25VVmVuYkVoOUVpSlE4M0lGQmJma09TNWJ2SDM0RFI3ZzFoeHdQL3FtNzQxWW1lYXRkWlE3NmZkWnpGVU9vemhocEJnWE5YNjVvTk1ZcFF5VHF5M2l4NXQyQTdUOElscDRWS01YVkJWZzlWMGROTzhTN2R5N3pSTTJpYzRXeXZsUWFvdkJQblRHa2djb0pFN0FNLzRwc014dkFmUDZsbm54V1A5VVNsQlBoOFFzUG41S3A3VVFaYTcvMmpieS9FL1NQNWNrVmNwWGRMVkh5aW82NDNoWHZBZXlVcUJ4eVBVYi9YQVhPTUs5TStYTVVkSzRzbEtOdzBSTm1lVU9SUk5DY3lqMk1ocnZtMjI4WlZoYnBhd1VVT25GclAwZ2d1NldxdWZlUGVzVHc9PSByc2Eta2V5LTIwMjEwNjIyCiAgICBzdWRvOiBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBncm91cHM6IHN1ZG8sIGRvY2tlcgogICAgc2hlbGw6IC9iaW4vYmFzaApuZXR3b3JrOgogIHZlcnNpb246IDIKICBldGhlcm5ldHM6CiAgICAgIGVuczE5MjoKICAgICAgICAgIGRoY3A0OiB0cnVlCg==
```

In this example, hostname is injected into the VM's ovfEnv, while the cloud-init configuration above goes into the ovfEnv user-data field; each does its own job. (Note that it reuses the ConfigMap name vm-with-pkey from the previous example, which is why kubectl reports it as configured rather than created below.)

```
[root@cli-vm tanzu]# kubectl apply -f vm-cloudinit-config.yaml
virtualmachineservice.vmoperator.vmware.com/svc-vm-cloudinit-conf created
virtualmachine.vmoperator.vmware.com/vm-cloudinit-conf created
configmap/vm-with-pkey configured

[root@cli-vm tanzu]# kubectl get vm,svc
NAME                                                     POWERSTATE   AGE
virtualmachine.vmoperator.vmware.com/just-vm             poweredOn    120m
virtualmachine.vmoperator.vmware.com/vm-cloudinit-conf   poweredOn    6m27s
virtualmachine.vmoperator.vmware.com/vm-with-pkey        poweredOn    47m

NAME                            TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
service/svc-just-vm             LoadBalancer   10.96.2.154   192.168.130.34   22:32620/TCP   123m
service/svc-vm-cloudinit-conf   LoadBalancer   10.96.1.149   192.168.130.36   22:30727/TCP   6m27s
service/svc-vm-with-pkey        LoadBalancer   10.96.1.233   192.168.130.35   22:32172/TCP   47m
```

The system calmly deploys all the expected resources. SSH into the VM with the custom user "centos":

![](https://i.imgur.com/pOkwTKH.png)

The result is as expected: the guest OS is running with the intended configuration.
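If something does not look right inside the guest, cloud-init itself is the first place to check. The commands below are an assumed troubleshooting aside, run inside the VM and not part of the original walkthrough:

```
# Assumed in-guest checks: confirm cloud-init finished and review what it applied.
cloud-init status --long
sudo cat /var/log/cloud-init-output.log   # output of the packages/runcmd stages
sudo cloud-init query userdata            # the user-data the guest actually received
```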
## Deploying an Application with Cloud-init

The final demonstration uses cloud-init to automatically install and configure an application inside a VM deployed through VM Service.

```
[root@cli-vm tanzu]# cat centos-user-data-nginx
#cloud-config
chpasswd:
    list: |
      centos:VMware1!
    expire: false
groups:
  - docker
users:
  - default
  - name: centos
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAy6mdraLUQZqtO/jL7fK7/fPPO3xhwRooDgLy3UWCp99+nUVenbEh9EiJQ83IFBbfkOS5bvH34DR7g1hxwP/qm741YmeatdZQ76fdZzFUOozhhpBgXNX65oNMYpQyTqy3ix5t2A7T8Ilp4VKMXVBVg9V0dNO8S7dy7zRM2ic4WyvlQaovBPnTGkgcoJE7AM/4psMxvAfP6lnnxWP9USlBPh8QsPn5Kp7UQZa7/2jby/E/SP5ckVcpXdLVHyio643hXvAeyUqBxyPUb/XAXOMK9M+XMUdK4slKNw0RNmeUORRNCcyj2Mhrvm228ZVhbpawUUOnFrP0ggu6WqufePesTw== rsa-key-20210622
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, docker
    shell: /bin/bash
network:
  version: 2
  ethernets:
      ens192:
          dhcp4: true
package_update: true
packages:
  - nginx
  - net-tools
runcmd:
  - echo '<h1>Demo apps from vSphere VM Serivce with Cloud-init</h1>' > /usr/share/nginx/html/index.html
  - chown root:root /usr/share/nginx/html/index.html
  - systemctl start nginx
  - firewall-offline-cmd --add-service=http
  - firewall-cmd --reload
```

Compared with the previous cloud-init file, this one additionally installs Nginx plus net-tools (for troubleshooting) and puts some simple content into Nginx. Since this is Nginx, port 80 also has to be added to the ports published by the LB Service. Encode the file to base64 again and use it in the ConfigMap:

```
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: svc-vm-nginx-cloudinit
  namespace: ns01
spec:
  selector:
    app: vm-nginx-cloudinit
  type: LoadBalancer
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: 80
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22
---
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: vm-nginx-cloudinit
  namespace: ns01
  labels:
    app: vm-nginx-cloudinit
spec:
  imageName: centos-stream-8-vmservice-v1alpha1-1619529007339
  className: best-effort-xsmall
  powerState: poweredOn
  storageClass: vwk-storage-policy
  networkInterfaces:
  - networkType: nsx-t
  vmMetadata:
    configMapName: vm-nginx-cloudinit
    transport: OvfEnv
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vm-nginx-cloudinit
  namespace: ns01
data:
  hostname: centos-nginx-cloudinit
  user-data: |
    I2Nsb3VkLWNvbmZpZwpjaHBhc3N3ZDoKICAgIGxpc3Q6IHwKICAgICAgY2VudG9zOlZNd2FyZTEhCiAgICBleHBpcmU6IGZhbHNlCmdyb3VwczoKICAtIGRvY2tlcgp1c2VyczoKICAtIGRlZmF1bHQKICAtIG5hbWU6IGNlbnRvcwogICAgc3NoLWF1dGhvcml6ZWQta2V5czoKICAgICAgLSBzc2gtcnNhIEFBQUFCM056YUMxeWMyRUFBQUFCSlFBQUFRRUF5Nm1kcmFMVVFacXRPL2pMN2ZLNy9mUFBPM3hod1Jvb0RnTHkzVVdDcDk5K25VVmVuYkVoOUVpSlE4M0lGQmJma09TNWJ2SDM0RFI3ZzFoeHdQL3FtNzQxWW1lYXRkWlE3NmZkWnpGVU9vemhocEJnWE5YNjVvTk1ZcFF5VHF5M2l4NXQyQTdUOElscDRWS01YVkJWZzlWMGROTzhTN2R5N3pSTTJpYzRXeXZsUWFvdkJQblRHa2djb0pFN0FNLzRwc014dkFmUDZsbm54V1A5VVNsQlBoOFFzUG41S3A3VVFaYTcvMmpieS9FL1NQNWNrVmNwWGRMVkh5aW82NDNoWHZBZXlVcUJ4eVBVYi9YQVhPTUs5TStYTVVkSzRzbEtOdzBSTm1lVU9SUk5DY3lqMk1ocnZtMjI4WlZoYnBhd1VVT25GclAwZ2d1NldxdWZlUGVzVHc9PSByc2Eta2V5LTIwMjEwNjIyCiAgICBzdWRvOiBBTEw9KEFMTCkgTk9QQVNTV0Q6QUxMCiAgICBncm91cHM6IHN1ZG8sIGRvY2tlcgogICAgc2hlbGw6IC9iaW4vYmFzaApuZXR3b3JrOgogIHZlcnNpb246IDIKICBldGhlcm5ldHM6CiAgICAgIGVuczE5MjoKICAgICAgICAgIGRoY3A0OiB0cnVlCnBhY2thZ2VfdXBkYXRlOiB0cnVlCnBhY2thZ2VzOgogIC0gbmdpbngKICAtIG5ldC10b29scwpydW5jbWQ6CiAgLSBlY2hvICc8aDE+RGVtbyBhcHBzIGZyb20gdlNwaGVyZSBWTSBTZXJpdmNlIHdpdGggQ2xvdWQtaW5pdDwvaDE+JyA+IC91c3Ivc2hhcmUvbmdpbngvaHRtbC9pbmRleC5odG1sCiAgLSBjaG93biByb290OnJvb3QgL3Vzci9zaGFyZS9uZ2lueC9odG1sL2luZGV4Lmh0bWwKICAtIHN5c3RlbWN0bCBzdGFydCBuZ2lueAogIC0gZmlyZXdhbGwtb2ZmbGluZS1jbWQgLS1hZGQtc2VydmljZT1odHRwCiAgLSBmaXJld2FsbC1jbWQgLS1yZWxvYWQK
```

Watch the deployment; no surprises.

```
[root@cli-vm tanzu]# kubectl apply -f vm-nginx-cloudinit.yaml
virtualmachineservice.vmoperator.vmware.com/svc-vm-nginx-cloudinit created
virtualmachine.vmoperator.vmware.com/vm-nginx-cloudinit created
configmap/vm-nginx-cloudinit created

[root@cli-vm tanzu]# kubectl get vm,svc
NAME                                                      POWERSTATE   AGE
virtualmachine.vmoperator.vmware.com/just-vm              poweredOn    6h3m
virtualmachine.vmoperator.vmware.com/vm-cloudinit-conf    poweredOn    4h9m
virtualmachine.vmoperator.vmware.com/vm-nginx-cloudinit   poweredOn    7m29s
virtualmachine.vmoperator.vmware.com/vm-with-pkey         poweredOn    4h51m

NAME                             TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
service/svc-just-vm              LoadBalancer   10.96.2.154   192.168.130.34   22:32620/TCP   6h6m
service/svc-vm-cloudinit-conf    LoadBalancer   10.96.1.149   192.168.130.36   22:30727/TCP   4h9m
service/svc-vm-nginx-cloudinit   LoadBalancer   10.96.3.230   192.168.130.38   80:32681/TCP   7m28s
service/svc-vm-with-pkey         LoadBalancer   10.96.1.233   192.168.130.35   22:32172/TCP   4h51m
```
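As a quick check from the CLI (an assumed aside, using the LB address shown in the output above), the published page can also be fetched with curl before switching to a browser:

```
# Assumed check: fetch the page through the LB VIP shown above.
curl http://192.168.130.38/
# Expected: the index.html written by the runcmd step, i.e.
# <h1>Demo apps from vSphere VM Serivce with Cloud-init</h1>
```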
SSH works just as before. Finally, open a browser to the LB address to verify that the application was deployed and configured successfully.

![](https://i.imgur.com/rJ90BeD.png)

Bingo!

References used for these tests:

https://core.vmware.com/blog/introducing-virtual-machine-provisioning-kubernetes-vm-service
https://www.virten.net/2021/05/getting-started-with-vsphere-with-tanzu-vm-service/

# Summary

With VM Service, vSphere with Tanzu ties the long-established virtual machine and today's red-hot Kubernetes containers more tightly together, spanning both mature and leading-edge technology stacks and letting customers take small, fast steps through their application transformation. Just as very few customers ever reached 100% virtualization even when that twenty-plus-year-old, thoroughly proven technology was at its hottest, only a small number of customers will run purely containerized applications at scale; most will likely stay on a mixed VM/container architecture for years to come. Under that premise, vSphere with Tanzu as a unified application platform gives application developers a more consistent experience while the traditional virtualization operations team keeps full authority over its resources. Isn't that hand-in-hand, heart-to-heart partnership exactly what DevOps is supposed to be...?