# Anushaya demo
## Actual demo
1. What problem are we solving?
- Customers want to run closer to metal
- Customers have an image they want to use
- Lack of CAPI support for Hyper-V
- Limited resources at edge sites - not enough to run a full ROBO deployment
2. Why BYOH?
- Low-hanging fruit to get us in the door - stop bleeding customers to the competition
- Stepping stone to CAP-T
3. What is BYOH?
- TKG on hosts that it didn't provision: the customer brings the host + OS; we handle Kubernetes upwards
4. Approach / Solution overview
- Demo PPT that Anushaya made
5. Demo
- Anushaya
- create mgmt cluster
- install byoh provider using clusterctl
- provision vSphere cluster (call out that we are NOT supporting mixed-mode infra providers in the MVP)
- Ubuntu VM - copy the artifacts, run the prep steps and the host agent
- show statuses
- show the ClusterResourceSets (CRSes) applying the CNI
- deploy a workload and show it's running on the BYOH
6. Coming up next
- Target for Eretria
- Full cluster on BYOH
- Automated k8s components install
- Tanzu CLI integration
## Slack message
- Tag Peter Grant, Shiva Reddy, Steven Gemelos and team in a thread
- How to get involved
- #tkg-bear slack channel
- GitHub code link
---
## Demo
### Prepare environment
```
cat ~/envvars.txt
source ~/envvars.txt
```
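`envvars.txt` is assumed to export the variables that the later `envsubst` steps substitute; a hypothetical minimal version (only the variables this runbook actually references - the vSphere template needs its own `VSPHERE_*` set on top):
```
# hypothetical contents of ~/envvars.txt - adjust to your environment
export CLUSTER_NAME=demo-cluster
export VSPHERE_SSH_AUTHORIZED_KEY='ssh-rsa AAAA... user@host'
```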
### Installing providers
```
clusterctl init \
--core cluster-api:v0.4.0 \
--bootstrap kubeadm:v0.4.0 \
--control-plane kubeadm:v0.4.0 \
--infrastructure vsphere:v0.8.0 \
--infrastructure byoh:v0.4.0 \
--config ~/.cluster-api/dev-repository/config.yaml
```
#### check if all controller pods are up and running
`k get po -A`
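(`k` is assumed to be a shell alias for `kubectl`, used interchangeably below:)
```
alias k=kubectl
```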
### Deploying workload cluster
```
cat ~/cluster-api-provider-vsphere/templates/cluster-template.yaml | envsubst | k apply -f -
```
#### check cluster status
```
clusterctl describe cluster demo-cluster \
--disable-grouping \
--disable-no-echo \
--show-conditions all \
--config ~/.cluster-api/dev-repository/config.yaml
```
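Instead of re-running `describe` until everything is green, something like this should block until the control plane is up (a sketch; CAPI v1alpha4 Clusters expose a `ControlPlaneReady` condition):
```
kubectl wait --for=condition=ControlPlaneReady cluster/demo-cluster --timeout=30m
```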
### Build and copy artifacts to Ubuntu VM
```
cd ~/cluster-api-provider-byoh
make release-binaries
scp bin/agent-linux-amd64 vmware@10.180.108.237:/home/vmware
scp ~/.kube/config vmware@10.180.108.237:/home/vmware
```
### Prepare the BYO Ubuntu Host
```
# run everything below as root
sudo su

# load kernel modules needed for container networking
modprobe overlay
modprobe br_netfilter
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

# enable bridged traffic filtering and IP forwarding
cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system

# install containerd
apt-get update
apt-get install -y containerd

# if you get a dpkg/apt lock error, find and kill the process holding it
lsof /var/lib/dpkg/lock
lsof /var/lib/apt/lists/lock
lsof /var/cache/apt/archives/lock
kill -9 <process_id>

# write a default containerd config and restart the service
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
systemctl restart containerd

# add the Kubernetes apt repo and install pinned k8s components
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt update
apt install -y kubelet=1.22.0-00 kubeadm=1.22.0-00 kubectl=1.22.0-00

# pin versions and enable services on boot
apt-mark hold containerd kubelet kubeadm kubectl
systemctl enable kubelet.service
systemctl enable containerd.service
```
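### Run the host agent
With the host prepared, start the agent binary copied earlier so the VM can register itself with the management cluster as a ByoHost. This is an assumed invocation; verify the exact flags with `./agent-linux-amd64 --help` for your build:
```
# on the Ubuntu VM, pointing the agent at the management cluster
# kubeconfig copied earlier (flag names are assumptions)
export KUBECONFIG=/home/vmware/config
./agent-linux-amd64 --namespace default
```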
### Apply machine deployment
```
cat ~/cluster-api-provider-byoh/test/e2e/data/infrastructure-provider-byoh/v1alpha4/md.yaml
cat ~/cluster-api-provider-byoh/test/e2e/data/infrastructure-provider-byoh/v1alpha4/md.yaml | envsubst | kubectl apply -f -
```
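To confirm the host was picked up, check the ByoHost and Machine objects on the management cluster (assuming the provider registers a `byohosts` CRD, as in the BYOH repo):
```
k get byohosts
k get machines
```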
#### get workload cluster kubeconfig
```
kubectl get secret demo-cluster-kubeconfig -o jsonpath='{.data.value}' | base64 -d > ~/demo-cluster-kubeconfig
k --kubeconfig ~/demo-cluster-kubeconfig get nodes
```
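Nodes will show `NotReady` at this point; that is expected until a CNI is installed in the next step.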
#### apply calico for CNI
```
k --kubeconfig ~/demo-cluster-kubeconfig apply -f ~/calico.yaml
k --kubeconfig ~/demo-cluster-kubeconfig get nodes
```
### Apply nginx deployment
```
k --kubeconfig ~/demo-cluster-kubeconfig apply -f nginx-deployment.yaml
k --kubeconfig ~/demo-cluster-kubeconfig get deployment
k --kubeconfig ~/demo-cluster-kubeconfig get svc
# verify the app through the NodePort service
curl http://10.180.108.237:31026/
```
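`nginx-deployment.yaml` is assumed to contain a standard nginx Deployment plus a NodePort Service; an equivalent imperative sketch:
```
k --kubeconfig ~/demo-cluster-kubeconfig create deployment nginx --image=nginx
k --kubeconfig ~/demo-cluster-kubeconfig expose deployment nginx --port=80 --type=NodePort
```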
### Reference: KubeadmConfigTemplate for the BYOH machine deployment
```
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
kind: KubeadmConfigTemplate
metadata:
  name: ${CLUSTER_NAME}-md-0
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          criSocket: /var/run/containerd/containerd.sock
          kubeletExtraArgs:
            cloud-provider: external
          name: demo-host
      preKubeadmCommands:
      - hostname demo-host
      - echo "::1 ipv6-localhost ipv6-loopback" >/etc/hosts
      - echo "127.0.0.1 localhost" >>/etc/hosts
      - echo "127.0.0.1 demo-host" >>/etc/hosts
      - echo "demo-host" >/etc/hostname
      users:
      - name: caph
        sshAuthorizedKeys:
        - '${VSPHERE_SSH_AUTHORIZED_KEY}'
        sudo: ALL=(ALL) NOPASSWD:ALL
```