Workshop Guide for Demo Appliance for Tanzu Kubernetes Grid 1.2.1 Fling
Pre-requisites: https://hackmd.io/Wo960wkVRkKPvwLXS3k6VA
TKG Cluster Deployment
Step 1. SSH to TKG Demo Appliance
SSH to the TKG Demo Appliance as the root user. If you can access the VM without going over the public internet, the address is 192.168.2.2 or whatever address you configured for the TKG Demo Appliance.

Step 2. Deploy TKG Management Cluster
There are two methods of setting up TKG: the TKG UI or the CLI. Both methods are documented below.
TKG UI
Run the following command to start the UI wizard:
tkg init --ui
Open up another terminal session on your workstation and run the following command to use SSH port forwarding so we can connect to the TKG UI from your local workstation:
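A minimal sketch, assuming the appliance is reachable at 192.168.2.2:

```bash
# Forward local port 8080 to the TKG UI listening on the appliance
ssh root@192.168.2.2 -L 8080:127.0.0.1:8080
```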
If you are on Windows, you can use PuTTY to set up SSH port forwarding with the following configuration:

Under Connection->SSH->Tunnels, use 8080 for Source Port and 127.0.0.1:8080 for Destination, then click the "Add" button.

Under Session, use ssh root@192.168.2.2 (or whatever internal IP you configured for the TKG Demo Appliance) and then log in with your credentials.

Once you have successfully opened the connection, open a local web browser and navigate to localhost:8080 and you should be taken to the following screen.

Click on "Deploy your management cluster on VMware vSphere" to begin the deployment.
IaaS Provider
Enter your VMC vCenter Server information, click the "Connect" button, and fill in the Datacenter and SSH Key (you can use a dummy value if you do not have an SSH key, but a real key is useful if you need to SSH into any of the TKG Nodes as the capv user).

When prompted, select Deploy TKG Management Cluster.
Management Cluster Settings

Select the Development flavor and specify a size, then give the K8s Management Cluster a name and specify the HA Proxy VM Template. The difference between the "Development" and "Production" plans is the number of Control Plane and Worker Node VMs that are deployed.

With the "Development" plan, 1 x Control Plane and 1 x Worker Node are provisioned. With the "Production" plan, 3 x Control Plane and 3 x Worker Nodes are provisioned. You can always scale up post-deployment using the tkg scale cluster operation.

Resources
Select the TKG Resource Pool, VM Folder and WorkloadDatastore.

Metadata

This section is optional. You can leave it blank.
Kubernetes Network
Select tkg-network and leave the other defaults.

OS Image

Select the K8s PhotonOS Template.
Customer Experience Improvement Program

You can leave the default settings.
Review
Review all settings to ensure they are correct, then click the "Deploy Management Cluster" button at the bottom to begin the deployment.
This can take ~6-10 minutes to complete. Once the Management Cluster has been deployed, you can close the web browser, go back to your first SSH session, and press ctrl+c to stop the TKG UI. We can verify that all pods are up by running the following command:
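A typical check, assuming your kubectl context points at the new Management Cluster:

```bash
# All pods across all namespaces should be Running
kubectl get pods -A
```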
TKG CLI
Edit config.yaml using either the vi or nano editor and update the VSPHERE_SERVER variable with the Internal IP Address of your VMC vCenter Server (e.g. 10.2.224.4) and the VSPHERE_PASSWORD variable with the credentials for the cloudadmin@vmc.local account, then save the file when you have finished. If you have other changes that deviate from this example, make sure to update those as well, including the names of the K8s and HA Proxy vSphere Templates.

Run the following command to copy our sample config.yaml into the .tkg directory:

cp config.yaml .tkg/config.yaml
Run the following command, specifying the Virtual IP Address to use for the Management Control Plane, to deploy the K8s Management Cluster:
tkg init -i vsphere -p dev --name tkg-mgmt --vsphere-controlplane-endpoint-ip 192.168.2.10
When prompted with the two questions, you can answer n and y to continue the deployment.

This will take ~6-8 minutes to complete, and once the Management Cluster has been deployed, we can verify that all pods are up by running the following command:
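For example:

```bash
kubectl get pods -A
```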
Step 3. Deploy TKG Workload Cluster
Run the following command to deploy a TKG Cluster called tkg-cluster-01, or any other name you wish to use, along with the Virtual IP Address for the TKG Workload Cluster Control Plane. By default, this will deploy the latest version of K8s, which is currently v1.19.1.
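A sketch, assuming an unused Virtual IP such as 192.168.2.11 on the TKG network:

```bash
tkg create cluster tkg-cluster-01 --plan dev --vsphere-controlplane-endpoint-ip 192.168.2.11
```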
This will take a few minutes to complete. Once the TKG Cluster is up and running, we need to retrieve the credentials before we can use it. To do so, run the following command:
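For example:

```bash
tkg get credentials tkg-cluster-01
```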
To switch context to our newly provisioned TKG Cluster, run the following command, which is based on the name of the cluster:
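The admin context typically follows a cluster-name-admin@cluster-name pattern, so for our example cluster:

```bash
kubectl config use-context tkg-cluster-01-admin@tkg-cluster-01
```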
Here is what your terminal prompt would look like after deploying the TKG Management Cluster:

Here is what your terminal prompt would look like after switching context to the TKG Workload Cluster:

To list all available Kubernetes contexts in case you wish to switch, you can use the following command:
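For example:

```bash
kubectl config get-contexts
```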
Let's ensure all pods in our new TKG Cluster are up by running the following command:
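For example:

```bash
kubectl get pods -A
```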
Step 4. Upgrade TKG Workload Cluster
To deploy earlier releases of K8s, you will need to set the VSPHERE_TEMPLATE environment variable to the name of the vSphere Template that matches your desired K8s version. In our example, we will deploy v1.18.10, so we simply run the following command:
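A sketch, assuming the v1.18.10 template in your vSphere inventory is named photon-3-kube-v1.18.10 (substitute the exact name of your template):

```bash
export VSPHERE_TEMPLATE=photon-3-kube-v1.18.10
```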
Next, run the following command to deploy a TKG v1.18.10 Cluster called tkg-cluster-02, or any other name you wish to use, along with the Virtual IP Address for the TKG Workload Cluster Control Plane. We will then upgrade this new cluster to the latest v1.19.3.
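A sketch, assuming an unused Virtual IP such as 192.168.2.12; the exact +vmware suffix of the version string may differ in your environment:

```bash
tkg create cluster tkg-cluster-02 --plan dev \
  --kubernetes-version v1.18.10+vmware.1 \
  --vsphere-controlplane-endpoint-ip 192.168.2.12
```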
Once the new TKG v1.18.10 Cluster has been provisioned, we can confirm its version before upgrading by running the following command:
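For example, after retrieving credentials and switching context to the new cluster, the VERSION column shows the K8s version on each node:

```bash
kubectl get nodes
```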
To start the upgrade, simply run the following command and specify the name of the cluster:
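For example:

```bash
tkg upgrade cluster tkg-cluster-02
```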
Depending on the size of your TKG Cluster, this operation can take some time to complete. To confirm the version of TKG Cluster after the upgrade, we can run the following command to verify:
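For example:

```bash
kubectl get nodes
```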
Step 5. Tanzu Mission Control (Optional)
Log in to the VMware Cloud Console and ensure that you have been entitled to the Tanzu Mission Control (TMC) service. You can confirm this by making sure you see the TMC Service tile as shown below. If you are not entitled, please reach out to your VMware account team to register for a TMC evaluation.
Next, click on your user name and navigate to "My Accounts" to create the required API token
Generate a new API token for the TMC service, which will be required to attach the TKG Cluster that we deployed earlier.
Make a note of the API Token as we will need it later. If you forget to copy it or have lost it, you can simply come back to this screen and re-generate it.
Let's now jump back into the TKG Demo Appliance to attach our TKG Cluster to TMC.
Run the following command to log in to the TMC service, providing the API Token that you created earlier along with a user-defined context name:
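A minimal sketch; the CLI prompts for the API token and context name interactively:

```bash
# Prompts for the API token and a user-defined context name
tmc login
```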
Now we need to create a TMC Cluster Group, which allows you to logically group TKG Clusters and is reflected in the TMC UI. To do so, run the following command and give it a name:
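A sketch, using a hypothetical group name of tkg-demo (the --name flag is an assumption; check tmc clustergroup create --help):

```bash
tmc clustergroup create --name tkg-demo
```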
To attach our TKG Cluster, we need to run the following command, which will generate a YAML manifest that we then apply to deploy the TMC Pod into our TKG Cluster:
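A sketch, reusing the cluster and group names from above; the attach command typically writes a k8s-attach-manifest.yaml into the current directory:

```bash
tmc cluster attach --name tkg-cluster-01 --group tkg-demo
```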
Finally, we run the apply to attach our TKG Cluster to TMC:
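Assuming the manifest generated in the previous step:

```bash
kubectl apply -f k8s-attach-manifest.yaml
```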
TKG Demos
Step 1. Storage Class and Persistent Volume
Change into the storage demo directory on the TKG Demo Appliance.

Retrieve the vSphere Datastore URL from the vSphere UI, which will be used to create our Storage Class definition.
Edit defaultstorageclass.yaml and update the datastoreurl property.
Create the Storage Class definition:
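For example:

```bash
kubectl apply -f defaultstorageclass.yaml
```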
Confirm the Storage Class was created:
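For example:

```bash
kubectl get sc
```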
Create a 2GB Persistent Volume Claim called pvc-test using our default Storage Class:
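A minimal sketch of such a claim, assuming a hypothetical pvc.yaml (the demo directory may already provide one):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteOnce      # mounted read-write by a single node
  resources:
    requests:
      storage: 2Gi       # 2GB volume, provisioned by the default Storage Class
```

Apply it with kubectl apply -f pvc.yaml.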
Confirm the Persistent Volume Claim was created:
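For example:

```bash
kubectl get pvc
```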
We can also see the new Persistent Volume in the vSphere UI by navigating to the specific vSphere Datastore under Monitor->Container Volumes.
Delete the Persistent Volume:
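For example:

```bash
kubectl delete pvc pvc-test
```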
Step 2. Simple K8s Demo App
Change into the yelb demo directory on the TKG Demo Appliance.

Create the yelb namespace for our demo:
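For example:

```bash
kubectl create ns yelb
```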
Deploy the yelb application:
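A sketch, assuming the demo directory provides the manifest as yelb.yaml:

```bash
kubectl -n yelb apply -f yelb.yaml
```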
Wait for the deployment to complete by running the following:
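For example:

```bash
kubectl -n yelb get deployments
```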
Retrieve the "UI" Pod ID:
Retrieve the IP Address of the Worker Node running the "UI" container:
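A sketch, substituting the Pod ID retrieved in the previous step:

```bash
kubectl -n yelb describe pod <yelb-ui-pod-id> | grep Node:
```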
The IP Address can be found at the top under the Node: property; in this example, it is 192.168.2.185.
If you have a desktop machine that has a browser and can access the TKG Network, you can open a browser to the following address:
http://192.168.2.185:31001
If you do not have such a system, we can still connect, but we will need to set up SSH port forwarding to the IP Address above.
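A sketch, assuming the appliance at 192.168.2.2 and the Worker Node address above:

```bash
ssh root@192.168.2.2 -L 31001:192.168.2.185:31001
```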
Once you have established the SSH tunnel, you can open a browser on your local system to localhost:31001.
The Yelb application is interactive, so feel free to play around with it.
Before proceeding to the next two demos, you will need to delete the yelb application:
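A sketch, assuming the same yelb.yaml manifest used for the deployment:

```bash
kubectl -n yelb delete -f yelb.yaml
```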
Step 3. Basic Load Balancer
Change into the metallb demo directory on the TKG Demo Appliance.

Edit metallb-config.yaml and update the addresses property using a small subset of the DHCP range from the tkg-network network. In our example, we used 192.168.2.0/24, so let's choose the last 5 IP addresses for Metal LB to provision from.
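A sketch of the relevant section of metallb-config.yaml, using the last five addresses of our example range (adjust to your own network):

```yaml
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.2.245-192.168.2.249
```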
Create the metallb-system namespace and the required secret:
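For example (the memberlist secret just needs a random key):

```bash
kubectl create ns metallb-system
kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"
```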
Deploy Metal LB:
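A sketch, assuming the demo directory provides the MetalLB manifest as metallb.yaml:

```bash
kubectl apply -f metallb.yaml
```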
Apply our Metal LB configuration:
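For example:

```bash
kubectl apply -f metallb-config.yaml
```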
Verify all pods within the metallb-system namespace are running:
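For example:

```bash
kubectl get pods -n metallb-system
```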
Step 4. Basic K8s Demo App Using Load Balancer
Change into the yelb demo directory on the TKG Demo Appliance.

Deploy the yelb Load Balancer version of the application:
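A sketch, assuming the Load Balancer variant of the manifest ships as yelb-lb.yaml:

```bash
kubectl -n yelb apply -f yelb-lb.yaml
```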
Wait for the deployment to complete by running the following:
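For example:

```bash
kubectl -n yelb get deployments
```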
Retrieve the Load Balancer IP for the Yelb Service:
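For example:

```bash
kubectl -n yelb get svc
```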
We should see an IP Address allocated from our Metal LB range in the EXTERNAL-IP column. Instead of connecting directly to a specific TKG Cluster Node, we can now connect via this Load Balancer IP, and you will see it is mapped to port 80 instead of the original application port 31001.

If you have a desktop machine that has a browser and can access the TKG Network, you can open a browser to the following address:
http://192.168.2.245
If you do not have such a system, we can still connect, but we will need to set up SSH port forwarding to the IP Address above; we'll use a local port of 8081 to ensure there are no conflicts.
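A sketch, assuming the appliance at 192.168.2.2 and the Load Balancer IP above:

```bash
ssh root@192.168.2.2 -L 8081:192.168.2.245:80
```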
Once you have established the SSH tunnel, you can open a browser on your local system to localhost:8081.
Extras (Optional)
Harbor
A local Harbor instance is running on the TKG Demo Appliance and provides all the required containers for setting up TKG Clusters and TKG demos in an air-gap/non-internet environment.
You can connect to the Harbor UI by pointing a browser to the address of your TKG Demo Appliance with the following credentials:
Username: admin
Password: Tanzu1!
You can also log in to Harbor using the Docker CLI to push and/or pull additional containers:
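A sketch, substituting the address of your TKG Demo Appliance; enter the password above when prompted:

```bash
docker login -u admin <tkg-demo-appliance-address>
```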
Octant
To easily navigate, learn, and debug Kubernetes, a tool such as Octant can be used. Octant is already installed on the TKG Demo Appliance, and you can launch it by running the following command:
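For example:

```bash
octant
```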
Octant listens locally on 127.0.0.1:7777. To access the Octant UI, we need to set up SSH port forwarding to the TKG Demo Appliance IP on port 7777.
To do so, run the following command in another terminal:
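A sketch, assuming the appliance at 192.168.2.2:

```bash
ssh root@192.168.2.2 -L 7777:127.0.0.1:7777
```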
Once you have established the SSH tunnel, you can open a browser on your local system to localhost:7777 and you should see the Octant UI.

For more information on how to use Octant, please refer to the official documentation here.
Forward TKG logs to vRealize Log Intelligence Cloud
Please see https://blogs.vmware.com/management/2020/06/configure-log-forwarding-from-vmware-tanzu-kubernetes-cluster-to-vrealize-log-insight-cloud.html for more details
Monitor TKG Clusters with vRealize Operations Cloud
Please see https://blogs.vmware.com/management/2020/06/monitor-tanzu-kubernetes-clusters-using-vrealize-operations.html for more details
Setup Network Proxy for TKG Mgmt and Workload Clusters
Please see https://www.virtuallyghetto.com/2020/05/how-to-configure-network-proxy-with-standalone-tanzu-kubernetes-grid-tkg.html for more details