# Workshop Guide for Demo Appliance for Tanzu Kubernetes Grid 1.3.1 Fling
Pre-requisites: https://hackmd.io/rHOUvdS3QsGlhCpGRU-hOQ
# TKG Cluster Deployment
## Step 1. SSH to TKG Demo Appliance
SSH to the TKG Demo Appliance as `root`. If you can reach the VM without going over the public internet, the address will be 192.168.2.2, or whatever address you configured for the TKG Demo Appliance.
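For example, using the default demo address referenced throughout this guide (substitute whatever address you configured for your appliance):
```
ssh root@192.168.2.2
```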
![](https://i.imgur.com/jZnOWqq.png)
## Step 2. Create SSH Key
Create a public SSH key, which is required for deployment and can also be used for debugging and troubleshooting purposes. Run the `ssh-keygen` command and just hit return to accept all the defaults.
```
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
...
```
The generated public key is stored in `/root/.ssh/id_rsa.pub` and will be required for setting up both the TKG Management and Workload Clusters.
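To view the public key so you can copy it into the cluster templates in the next step, simply cat the file (the key value below is truncated for illustration):
```
# cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2E...
```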
## Step 3. Deploy TKG Management Cluster
TKG Management Cluster can be deployed using either the Tanzu UI or CLI. When using the TKG Demo Appliance, the Tanzu CLI method is the only supported option due to its use of an embedded private Harbor Registry.
### Tanzu CLI
The TKG Demo Appliance includes 6 example YAML files:
For a VMware Cloud on AWS environment:
* **vmc-tkg-mgmt-template.yaml**
* **vmc-tkg-workload-1-template.yaml**
* **vmc-tkg-workload-2-template.yaml**
For an On-Premises vSphere environment:
* **vsphere-tkg-mgmt-template.yaml**
* **vsphere-tkg-workload-1-template.yaml**
* **vsphere-tkg-workload-2-template.yaml**
Edit the respective TKG Mgmt and Workload templates for your environment using either the vi or nano editor. Update the `VSPHERE_SERVER` variable with the internal IP Address of your VMC vCenter Server (e.g. 10.2.224.4) and the `VSPHERE_PASSWORD` variable with the credentials for the cloudadmin@vmc.local account.
```
VSPHERE_SERVER: 10.2.224.4
VSPHERE_PASSWORD: FILL-ME-IN
```
You will also need to update the `VSPHERE_SSH_AUTHORIZED_KEY` variable with the SSH public key that was generated in Step 2, and save the file when you have finished.
```
VSPHERE_SSH_AUTHORIZED_KEY: FILL-ME-IN
```
If you have other changes that deviate from this example, make sure to update those as well.
> **Note:** The YAML files are just examples for demo purposes. You can update other variables to match your environment.
In this example, the Virtual IP Address that will be used for the Management Cluster Control Plane will be `192.168.2.3`, which can be changed by updating the `VSPHERE_CONTROL_PLANE_ENDPOINT` variable.
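For example, to use the address mentioned above, the relevant entry in the management cluster template would look like this:
```
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.2.3
```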
Run the following command to create the TKG Management Cluster:
```
# tanzu management-cluster create -f vmc-tkg-mgmt-template.yaml -v2
Validating the pre-requisites...
vSphere 7.0 Environment Detected.
You have connected to a vSphere 7.0 environment which does not have vSphere with Tanzu enabled. vSphere with Tanzu includes
an integrated Tanzu Kubernetes Grid Service which turns a vSphere cluster into a platform for running Kubernetes workloads in dedicated
resource pools. Configuring Tanzu Kubernetes Grid Service is done through vSphere HTML5 client.
Tanzu Kubernetes Grid Service is the preferred way to consume Tanzu Kubernetes Grid in vSphere 7.0 environments. Alternatively you may
deploy a non-integrated Tanzu Kubernetes Grid instance on vSphere 7.0.
Deploying TKG management cluster on vSphere 7.0 ...
Identity Provider not configured. Some authentication features won't work.
Setting up management cluster...
Validating configuration...
Using infrastructure provider vsphere:v0.7.7
Generating cluster configuration...
Setting up bootstrapper...
Bootstrapper created. Kubeconfig: /root/.kube-tkg/tmp/config_ERY0D78L
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.14" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.7" TargetNamespace="capv-system"
Start creating management cluster...
Saving management cluster kubeconfig into /root/.kube/config
Installing providers on management cluster...
Fetching providers
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.14" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.7" TargetNamespace="capv-system"
Waiting for the management cluster to get ready for move...
Waiting for addons installation...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Waiting for additional components to be up and running...
Context set for management cluster tkg-mgmt-01 as 'tkg-mgmt-01-admin@tkg-mgmt-01'.
Management cluster created!
You can now create your first workload cluster by running the following:
tanzu cluster create [name] -f [file]
Some addons might be getting installed! Check their status by running the following:
kubectl get apps -A
```
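Before moving on, you can optionally confirm that the Management Cluster is healthy. A quick sanity check (output omitted here; node names and counts will vary by environment):
```
# tanzu management-cluster get
# k get nodes
```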
## Step 4. Deploy TKG Workload Cluster
In this example, the Virtual IP Address that will be used for the Control Plane of the K8s Workload Cluster called `tkg-cluster-01` will be `192.168.2.4`, which can be changed by updating the `VSPHERE_CONTROL_PLANE_ENDPOINT` variable. By default, this will deploy the latest version of K8s.
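This corresponds to the following entry in the first workload cluster template:
```
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.2.4
```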
Run the following command to create the first TKG Workload Cluster:
```
# tanzu cluster create -f vmc-tkg-workload-1-template.yaml
Validating configuration...
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
Creating workload cluster 'tkg-cluster-01'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...
Waiting for addons installation...
Workload cluster 'tkg-cluster-01' created
```
> **Note:** This can take up to 10 minutes to complete
Once the TKG Cluster is up and running, we need to retrieve the credentials before we can use it. To do so, run the following command:
```
# tanzu cluster kubeconfig get --admin tkg-cluster-01
Credentials of workload cluster 'tkg-cluster-01' have been saved
You can now access the cluster by running 'kubectl config use-context tkg-cluster-01-admin@tkg-cluster-01'
```
To switch context to our newly provisioned TKG Cluster, run the following command which will be based on the name of the cluster:
```
# k config use-context tkg-cluster-01-admin@tkg-cluster-01
```
> **Note:** Another benefit of the TKG Demo Appliance is that the terminal prompt is automatically updated based on the TKG Cluster context you are currently in. If you were running the TKG CLI on your desktop, this would not be shown and you would have to interrogate your context using standard kubectl commands.
Here is what your terminal prompt would look like after deploying the TKG Management Cluster:
![](https://i.imgur.com/XXiw8wu.png)
Here is what your terminal prompt would look like after switching context to the TKG Workload Cluster:
![](https://i.imgur.com/K74XIbe.png)
To list all available Kubernetes contexts in case you wish to switch, you can use the following command:
```
# k config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* tkg-cluster-01-admin@tkg-cluster-01 tkg-cluster-01 tkg-cluster-01-admin
tkg-mgmt-01-admin@tkg-mgmt-01 tkg-mgmt-01 tkg-mgmt-01-admin
```
Let's ensure all pods in our new TKG Cluster are up by running the following command:
```
# k get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system antrea-agent-25jql 2/2 Running 0 5m53s
kube-system antrea-agent-mqtt9 2/2 Running 1 4m15s
kube-system antrea-agent-rdp75 2/2 Running 0 5m54s
kube-system antrea-agent-rgkrq 2/2 Running 0 5m54s
kube-system antrea-agent-sxvmr 2/2 Running 0 95s
kube-system antrea-agent-xmt66 2/2 Running 0 5m53s
kube-system antrea-controller-744f9d6565-fcwcs 1/1 Running 0 5m54s
kube-system coredns-6d8c4747b6-75rrt 1/1 Running 0 6m55s
kube-system coredns-6d8c4747b6-pbvdc 1/1 Running 0 6m55s
kube-system etcd-tkg-cluster-01-control-plane-4dvqr 1/1 Running 0 4m4s
kube-system etcd-tkg-cluster-01-control-plane-65hb8 1/1 Running 0 92s
kube-system etcd-tkg-cluster-01-control-plane-fzc27 1/1 Running 0 6m42s
kube-system kube-apiserver-tkg-cluster-01-control-plane-4dvqr 1/1 Running 0 4m13s
kube-system kube-apiserver-tkg-cluster-01-control-plane-65hb8 1/1 Running 0 93s
kube-system kube-apiserver-tkg-cluster-01-control-plane-fzc27 1/1 Running 0 6m43s
kube-system kube-controller-manager-tkg-cluster-01-control-plane-4dvqr 1/1 Running 0 4m13s
kube-system kube-controller-manager-tkg-cluster-01-control-plane-65hb8 1/1 Running 0 93s
kube-system kube-controller-manager-tkg-cluster-01-control-plane-fzc27 1/1 Running 1 6m43s
kube-system kube-proxy-7488n 1/1 Running 0 5m54s
kube-system kube-proxy-9h77v 1/1 Running 0 5m54s
kube-system kube-proxy-dmf7g 1/1 Running 0 4m15s
kube-system kube-proxy-jnjk5 1/1 Running 0 95s
kube-system kube-proxy-l4twp 1/1 Running 0 6m55s
kube-system kube-proxy-x9fqj 1/1 Running 0 5m53s
kube-system kube-scheduler-tkg-cluster-01-control-plane-4dvqr 1/1 Running 0 4m4s
kube-system kube-scheduler-tkg-cluster-01-control-plane-65hb8 1/1 Running 0 94s
kube-system kube-scheduler-tkg-cluster-01-control-plane-fzc27 1/1 Running 1 6m43s
kube-system kube-vip-tkg-cluster-01-control-plane-4dvqr 1/1 Running 0 4m13s
kube-system kube-vip-tkg-cluster-01-control-plane-65hb8 1/1 Running 0 93s
kube-system kube-vip-tkg-cluster-01-control-plane-fzc27 1/1 Running 0 6m43s
kube-system metrics-server-6f7854bb4f-rgmkl 1/1 Running 0 6m28s
kube-system vsphere-cloud-controller-manager-46dfc 1/1 Running 1 5m57s
kube-system vsphere-cloud-controller-manager-g6dnf 1/1 Running 0 3m17s
kube-system vsphere-cloud-controller-manager-rwwkd 1/1 Running 0 89s
kube-system vsphere-csi-controller-79f8c6d6d8-l5zb8 6/6 Running 4 6m29s
kube-system vsphere-csi-node-g4rvl 3/3 Running 0 5m53s
kube-system vsphere-csi-node-mg9wc 3/3 Running 0 5m54s
kube-system vsphere-csi-node-ng94q 3/3 Running 0 95s
kube-system vsphere-csi-node-pt7kw 3/3 Running 0 5m54s
kube-system vsphere-csi-node-wmxdz 3/3 Running 0 6m29s
kube-system vsphere-csi-node-zf4hw 3/3 Running 0 4m15s
tkg-system kapp-controller-7466874568-vxjvz 1/1 Running 0 6m57s
```
## Step 5. Upgrade TKG Workload Cluster
To be able to deploy an earlier K8s release, we can first list the different K8s releases available by using the following command:
```
# tanzu kubernetes-release get
NAME VERSION COMPATIBLE UPGRADEAVAILABLE
v1.19.9---vmware.2-tkg.1 v1.19.9+vmware.2-tkg.1 True True
v1.20.5---vmware.2-tkg.1 v1.20.5+vmware.2-tkg.1 True False
```
> **Note:** The TKG Demo Appliance has only been configured to support the latest two versions
To deploy TKG v1.19.9 for example, we will need to specify the version as `v1.19.9---vmware.2-tkg.1` using the `--tkr` command-line option.
Run the following command to deploy a TKG v1.19.9 Workload Cluster called `tkg-cluster-02`, which will use `192.168.2.5` as the Virtual IP Address for the TKG Workload Cluster Control Plane:
```
# tanzu cluster create -f vmc-tkg-workload-2-template.yaml --tkr v1.19.9---vmware.2-tkg.1
Validating configuration...
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
Creating workload cluster 'tkg-cluster-02'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...
Waiting for addons installation...
Workload cluster 'tkg-cluster-02' created
```
> **Note:** This can take up to 10 minutes to complete
Once the new TKG v1.19.9 Cluster has been provisioned, we can confirm its version before upgrading by running the following command:
```
# tanzu cluster list
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN
tkg-cluster-01 default running 3/3 3/3 v1.20.5+vmware.2 <none> prod
tkg-cluster-02 default running 3/3 3/3 v1.19.9+vmware.2 <none> prod
```
To start the upgrade, simply run the following command and specify the name of the cluster:
```
# tanzu cluster upgrade tkg-cluster-02
Upgrading workload cluster 'tkg-cluster-02' to kubernetes version 'v1.20.5+vmware.2'. Are you sure? [y/N]: y
Validating configuration...
Verifying kubernetes version...
Retrieving configuration for upgrade cluster...
Using custom image repository: registry.rainpole.io/library
Create InfrastructureTemplate for upgrade...
Configuring cluster for upgrade...
Upgrading control plane nodes...
Patching KubeadmControlPlane with the kubernetes version v1.20.5+vmware.2...
Warning: Image accessibility verification failed. Image registry.rainpole.io/library/kube-proxy:v1.19.9_vmware.2 is not reachable from current machine. Please make sure the image is pullable from the Kubernetes node for upgrade to complete successfully
Waiting for kubernetes version to be updated for control plane nodes
Upgrading worker nodes...
Patching MachineDeployment with the kubernetes version v1.20.5+vmware.2...
Waiting for kubernetes version to be updated for worker nodes...
updating additional components: 'metadata/tkg' ...
updating additional components: 'addons-management/kapp-controller' ...
Cluster 'tkg-cluster-02' successfully upgraded to kubernetes version 'v1.20.5+vmware.2'
```
> **Note:** The warning message about image accessibility is expected and benign due to the air-gap setup of the TKG Demo Appliance
Depending on the size of your TKG Cluster, this operation can take some time to complete. To confirm the version of TKG Cluster after the upgrade, we can run the following command to verify:
```
# tanzu cluster list
NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN
tkg-cluster-01 default running 3/3 3/3 v1.20.5+vmware.2 <none> prod
tkg-cluster-02 default running 3/3 3/3 v1.20.5+vmware.2 <none> prod
```
## Step 6. Tanzu Mission Control (Optional)
> **Note:** Internet connectivity will be required from the TKG Network to reach Tanzu Mission Control service.
Login to the [VMware Cloud Console](https://console.cloud.vmware.com/) and ensure that you have been entitled to the Tanzu Mission Control (TMC) service. You can confirm this by making sure you see the TMC Service tile as shown below. If you are not entitled, please reach out to your VMware account team to register for a TMC evaluation.
![](https://i.imgur.com/bjAae69.png)
### Register TKG Management Cluster
This enables the ability to manage and easily register TKG Workload Clusters directly from the TMC console.
Click on **Cluster groups** and create a new TMC Cluster Group. In this example, the group is named **vmc-cluster**.
![](https://i.imgur.com/147429D.png)
Navigate to **Administration** and then click on the **Register Management Cluster** button to begin the process. Provide a name for the TKG Management Cluster, which in this example is **tkg-mgmt-01**, and select the TMC Cluster Group that was created in the previous step.
![](https://i.imgur.com/yT736Dh.png)
The last step is to deploy the TMC Control Agent to our TKG Management Cluster.
![](https://i.imgur.com/wiRjwZr.png)
Copy the TMC URL and run the following command within the context of the TKG Management Cluster.
```
# k config use-context tkg-mgmt-01-admin@tkg-mgmt-01
# k apply -f '<tmcControlAgentYAMLURL>'
namespace/vmware-system-tmc created
configmap/stack-config created
secret/tmc-access-secret created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/agents.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionconfigs.intents.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionintegrations.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensionresourceowners.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensions.clusters.tmc.cloud.vmware.com created
serviceaccount/extension-manager created
clusterrole.rbac.authorization.k8s.io/extension-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/extension-manager-rolebinding created
service/extension-manager-service created
deployment.apps/extension-manager created
serviceaccount/extension-updater-serviceaccount created
podsecuritypolicy.policy/vmware-system-tmc-agent-restricted created
clusterrole.rbac.authorization.k8s.io/vmware-system-tmc-psp-agent-restricted created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/extension-updater-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/vmware-system-tmc-psp-agent-restricted created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/extension-updater-clusterrolebinding created
deployment.apps/extension-updater created
serviceaccount/agent-updater created
clusterrole.rbac.authorization.k8s.io/agent-updater-role created
clusterrolebinding.rbac.authorization.k8s.io/agent-updater-rolebinding created
deployment.apps/agent-updater created
cronjob.batch/agentupdater-workload created
```
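Before heading back to the TMC Console, you can optionally confirm that the agent pods in the newly created namespace are coming up (pod names and readiness will vary while the agents initialize):
```
# k -n vmware-system-tmc get pods
```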
Once the deployment has completed, navigate back to the TMC Console and click on **View Management Cluster**; you should now see that the TKG Management Cluster has been successfully registered.
![](https://i.imgur.com/2fwri4o.png)
> **Note:** Provisioning of TKG Workload Clusters using the TMC UI is not supported with the air-gap configuration of the TKG Demo Appliance.
### TMC UI to Attach TKG Workload Cluster
Click on the **Workload Clusters** tab, select all TKG Workload Clusters that you wish to attach to TMC, and then click on the **Manage Clusters** button.
![](https://i.imgur.com/RrstmeN.png)
You will be prompted to select the TMC Cluster Group; click the **Manage** button to complete the process.
![](https://i.imgur.com/DUbfsHr.png)
Once the operation has completed, the **Managed** column for the TKG Workload Cluster should now show **Yes**.
### Manually Attach TKG Workload Cluster
This enables the ability to manage existing TKG Workload Clusters that have been deployed using the TKG CLI.
Click on your user name and navigate to "My Account" to create the required API token.
![](https://i.imgur.com/Re1rrjk.png)
Generate a new API token for the TMC service, which will be required to attach the TKG Cluster that we deployed earlier.
![](https://i.imgur.com/hszgDVL.png)
Make a note of the API Token as we will need it later. If you forget to copy it or lose it, you can simply come back to this screen and re-generate it.
Let's now jump back into the TKG Demo Appliance to attach our TKG Cluster to TMC.
Run the following command to log in to the TMC service, providing the API Token that you created earlier along with a user-defined context name.
```
# tmc login -c -n tkg-cluster-01
```
![](https://i.imgur.com/HkRpwcc.png)
To attach our TKG Cluster, we need to run the following command, which will generate a YAML manifest that we then apply to deploy the TMC Pods into our TKG Cluster:
```
# tmc cluster attach -g vmc-cluster -n vmc-tkg-cluster-01
✔ cluster "vmc-tkg-cluster-01" created successfully
ℹ Run `kubectl apply -f k8s-attach-manifest.yaml` to attach the cluster
```
Finally, we apply the generated manifest to attach our TKG Cluster to TMC:
```
# k apply -f k8s-attach-manifest.yaml
namespace/vmware-system-tmc created
configmap/stack-config created
secret/tmc-client-secret created
customresourcedefinition.apiextensions.k8s.io/agents.clusters.tmc.cloud.vmware.com created
customresourcedefinition.apiextensions.k8s.io/extensions.clusters.tmc.cloud.vmware.com created
serviceaccount/extension-manager created
clusterrole.rbac.authorization.k8s.io/extension-manager-role created
clusterrolebinding.rbac.authorization.k8s.io/extension-manager-rolebinding created
service/extension-manager-service created
deployment.apps/extension-manager created
serviceaccount/extension-updater-serviceaccount created
clusterrole.rbac.authorization.k8s.io/extension-updater-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/extension-updater-clusterrolebinding created
service/extension-updater created
deployment.apps/extension-updater created
serviceaccount/agent-updater created
clusterrole.rbac.authorization.k8s.io/agent-updater-role created
clusterrolebinding.rbac.authorization.k8s.io/agent-updater-rolebinding created
deployment.apps/agent-updater created
cronjob.batch/agentupdater-workload created
```
![](https://i.imgur.com/blxDjKg.png)
# TKG Demos
## Step 1. Storage Class and Persistent Volume
Change into the `storage` demo directory on the TKG Demo Appliance:
```
cd /root/demo/storage
```
Retrieve the vSphere Datastore URL from the vSphere UI, which will be used to create our Storage Class definition.
![](https://i.imgur.com/rBp1ZYb.png)
> **Note:** There are many ways of associating a vSphere Datastore with a Storage Class definition. In VMC, this approach is currently required as the Cloud Native Storage/Container Storage Interface in VMC cannot leverage vSphere Tags yet.
Edit the `defaultstorageclass.yaml` and update the `datastoreurl` property
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  datastoreurl: ds:///vmfs/volumes/vsan:c4f738225e324622-b1e100dd1abe8249/
```
> **Note:** Make sure there is a trailing "/" at the end of the datastoreurl value; this is required. If you copy the URL from the vSphere UI, the correct URL will be included.
Create Storage Class definition:
```
# k apply -f defaultstorageclass.yaml
storageclass.storage.k8s.io/standard created
```
Confirm the Storage Class was created:
```
# k get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) csi.vsphere.vmware.com Delete Immediate false 3s
```
Create a 2GB Persistent Volume called `pvc-test` using our default Storage Class:
```
# k apply -f pvc.yaml
persistentvolumeclaim/pvc-test created
```
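For reference, `pvc.yaml` is a standard PersistentVolumeClaim definition. A minimal sketch of what it likely contains, based on the 2Gi / ReadWriteOnce / `standard` values shown in the confirmation output that follows (the exact file in the demo directory may differ slightly):
```
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 2Gi
```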
Confirm the Persistent Volume was created:
```
# k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-test Bound pvc-ff86c605-d600-4bcc-ac03-9f511fe7cb77 2Gi RWO standard 13m
```
> **Note:** It can take a few seconds before the PV is realized in vSphere
We can also see the new Persistent Volume in the vSphere UI by navigating to the specific vSphere Datastore under `Monitor->Container Volumes`
![](https://i.imgur.com/bzQzGSG.png)
Delete the Persistent Volume:
```
# k delete pvc pvc-test
persistentvolumeclaim "pvc-test" deleted
```
## Step 2. Simple K8s Demo App
Change into the `yelb` demo directory on the TKG Demo Appliance:
```
cd /root/demo/yelb
```
Create `yelb` namespace for our demo:
```
k create ns yelb
```
Deploy the yelb application:
```
# k apply -f yelb.yaml
service/redis-server created
service/yelb-db created
service/yelb-appserver created
service/yelb-ui created
deployment.apps/yelb-ui created
deployment.apps/redis-server created
deployment.apps/yelb-db created
deployment.apps/yelb-appserver created
```
Wait for the deployment to complete by running the following:
```
# k -n yelb get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
redis-server 1/1 1 1 47s
yelb-appserver 1/1 1 1 47s
yelb-db 1/1 1 1 47s
yelb-ui 1/1 1 1 47s
```
Retrieve the "UI" Pod ID:
```
# k -n yelb get pods | grep ui
yelb-ui-79c68df689-66bwq 1/1 Running 0 2m
```
Retrieve the IP Address of the Worker Node running the "UI" container:
```
# k -n yelb describe pod yelb-ui-79c68df689-66bwq
Name: yelb-ui-79c68df689-66bwq
Namespace: yelb
Priority: 0
Node: william-02-md-0-848878f85b-vddk4/192.168.2.185
Start Time: Tue, 24 Mar 2020 18:19:23 +0000
...
...
```
The IP Address can be found at the top under the `Node:` property; in this example, it is `192.168.2.185`.
If you have a desktop machine that has a browser and can access the TKG Network, you can open a browser to the following address: `http://192.168.2.185:31001`
If you do not have such a system, we can still connect, but we will need to set up SSH port forwarding to the IP Address above.
```
ssh root@[TKG-DEMO-APPLIANCE-IP] -L 30001:192.168.2.185:31001
```
Once you have established the SSH tunnel, you can open a browser on your local system to `localhost:30001`
The Yelb application is interactive, so feel free to play around with it
![](https://i.imgur.com/0jffUU7.png)
> **Note:** Details about this demo app can be found [here](https://www.virtuallyghetto.com/2020/03/how-to-fix-extensions-v1beta1-missing-required-field-selector-for-yelb-kubernetes-application.html) and the original author of the application is Massimo Re Ferrè (his legacy lives on as an ex-VMware employee)
Before proceeding to the next two demos, you will need to delete the `yelb` application:
```
k delete -f yelb.yaml
```
## Step 3. Basic Load Balancer
Change into the `metallb` demo directory on the TKG Demo Appliance:
```
cd /root/demo/metallb
```
Edit the `metallb-config.yaml` and update the `addresses` property using a small subset of the DHCP range from the `tkg-network` network. In our example, we used 192.168.2.0/24, so let's choose the last 5 IP Addresses for MetalLB to provision from.
```
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.2.246-192.168.2.250
```
Create the `metallb-system` namespace:
```
# k create ns metallb-system
namespace/metallb-system created
```
Create required secret:
```
# k create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
secret/memberlist created
```
Deploy MetalLB:
```
# k apply -n metallb-system -f metallb.yaml
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
```
Apply our MetalLB configuration:
```
# kubectl apply -n metallb-system -f metallb-config.yaml
configmap/config created
```
Verify all pods within the `metallb-system` namespace are running:
```
# k -n metallb-system get pod
NAME READY STATUS RESTARTS AGE
controller-66fdff65b9-bwxq7 1/1 Running 0 2m40s
speaker-8jz5m 1/1 Running 0 2m40s
speaker-bmxcq 1/1 Running 0 2m40s
speaker-fsxq5 1/1 Running 0 2m40s
speaker-t2pgj 1/1 Running 0 2m40s
speaker-vggr8 1/1 Running 0 2m40s
speaker-zjw7v 1/1 Running 0 2m40s
```
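You can also double-check that the address pool configuration was picked up by inspecting the ConfigMap that was just applied:
```
# k -n metallb-system get configmap config -o yaml
```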
## Step 4. Basic K8s Demo App Using Load Balancer
Change into the `yelb` demo directory on the TKG Demo Appliance:
```
cd /root/demo/yelb
```
Deploy the yelb Load Balancer version of the application:
```
# k apply -f yelb-lb.yaml
service/redis-server created
service/yelb-db created
service/yelb-appserver created
service/yelb-ui created
deployment.apps/yelb-ui created
deployment.apps/redis-server created
deployment.apps/yelb-db created
deployment.apps/yelb-appserver created
```
Wait for the deployment to complete by running the following:
```
# k -n yelb get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
redis-server 1/1 1 1 52s
yelb-appserver 1/1 1 1 52s
yelb-db 1/1 1 1 52s
yelb-ui 1/1 1 1 52s
```
Retrieve the Load Balancer IP for the Yelb Service:
```
# k -n yelb get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-server ClusterIP 100.66.186.168 <none> 6379/TCP 84s
yelb-appserver ClusterIP 100.65.7.122 <none> 4567/TCP 84s
yelb-db ClusterIP 100.69.33.14 <none> 5432/TCP 84s
yelb-ui LoadBalancer 100.68.255.238 192.168.2.245 80:31376/TCP 84s
```
We should see an IP Address allocated from our MetalLB range in the `EXTERNAL-IP` column. Instead of connecting directly to a specific TKG Cluster Node, we can now connect via this Load Balancer IP, which is mapped to port 80 instead of the original application port 31001.
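From any machine on the TKG Network (including the TKG Demo Appliance itself), a quick way to confirm the Load Balancer is responding, assuming the example IP above:
```
# curl -I http://192.168.2.245
```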
If you have a desktop machine that has a browser and can access the TKG Network, you can open a browser to the following address: `http://192.168.2.245`
If you do not have such a system, we can still connect, but we will need to set up SSH port forwarding to the IP Address above; we will use a local port of 8081 to ensure there are no conflicts.
```
ssh root@[TKG-DEMO-APPLIANCE-IP] -L 8081:192.168.2.245:80
```
Once you have established the SSH tunnel, you can open a browser on your local system to `localhost:8081`
# Extras (Optional)
## Harbor
A local Harbor instance is running on the TKG Demo Appliance and provides all the required containers for setting up TKG Clusters and TKG demos in an air-gap/non-internet environment.
You can connect to the Harbor UI by pointing a browser to the address of your TKG Demo Appliance with the following credentials:
Username: `admin`
Password: `Tanzu1!`
![](https://i.imgur.com/evV3YoG.png)
You can also log in to Harbor using the Docker CLI to push and/or pull additional containers:
```
# docker login -u admin -p Tanzu1! registry.rainpole.io/library
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
```
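Once logged in, you can push additional images into the embedded registry with standard Docker commands. A minimal sketch, where `busybox:latest` is just a placeholder for an image you already have locally:
```
# docker tag busybox:latest registry.rainpole.io/library/busybox:latest
# docker push registry.rainpole.io/library/busybox:latest
```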
> **Note:** You must use `registry.rainpole.io`, which has a real certificate generated by Let's Encrypt (this requires a domain that you own, validated via a DNS challenge). To support air-gap/non-internet scenarios, TKG today requires a registry with a properly signed certificate; self-signed and custom CA certificates are NOT currently supported.
## Octant
To easily navigate, learn, and debug Kubernetes, a tool such as Octant can be used. Octant is already installed on the TKG Demo Appliance and you can launch it by running the following command:
```
octant
```
![](https://i.imgur.com/BXzpSBp.png)
Octant listens locally on `127.0.0.1:7777`, so to access the Octant UI we need to set up SSH port forwarding to the TKG Demo Appliance IP on port `7777`.
To do so, run the following command in another terminal:
```
ssh root@[TKG-DEMO-APPLIANCE-IP] -L 7777:127.0.0.1:7777
```
> **Note:** You can also use Putty to setup SSH port forwarding, please refer to the TKG UI for instructions.
Once you have established the SSH tunnel, you can open a browser on your local system to `localhost:7777` and you should see the Octant UI.
![](https://i.imgur.com/qyKNwxr.png)
For more information on how to use Octant, please refer to the official documentation [here](https://octant.dev/docs/master/)
## Forward TKG logs to vRealize Log Intelligence Cloud
> **Note:** Internet connectivity will be required from the TKG Network to reach the vRLIC service.
Please see [https://blogs.vmware.com/management/2020/06/configure-log-forwarding-from-vmware-tanzu-kubernetes-cluster-to-vrealize-log-insight-cloud.html](https://blogs.vmware.com/management/2020/06/configure-log-forwarding-from-vmware-tanzu-kubernetes-cluster-to-vrealize-log-insight-cloud.html) for more details
## Monitor TKG Clusters with vRealize Operations Cloud
> **Note:** Internet connectivity will be required from the TKG Network to reach the vROPs Cloud service.
Please see [https://blogs.vmware.com/management/2020/06/monitor-tanzu-kubernetes-clusters-using-vrealize-operations.html](https://blogs.vmware.com/management/2020/06/monitor-tanzu-kubernetes-clusters-using-vrealize-operations.html) for more details
## Setup Network Proxy for TKG Mgmt and Workload Clusters
Please see [https://www.virtuallyghetto.com/2020/05/how-to-configure-network-proxy-with-standalone-tanzu-kubernetes-grid-tkg.html](https://www.virtuallyghetto.com/2020/05/how-to-configure-network-proxy-with-standalone-tanzu-kubernetes-grid-tkg.html) for more details