# Hands-on with Kubernetes Native Infrastructure (KNI) & OpenShift Virtualization
## Testing with OpenShift 4.11
Link: https://github.com/RHFieldProductManagement/baremetal-ipi-lab
## May need to update the install-config for 4.11 (still testing)
```
vi scripts/install-config.yaml
```
* Example: https://github.com/tosin2013/baremetal-ipi-lab/blob/master/04-deploying-cluster.md
* Link: https://docs.openshift.com/container-platform/4.11/installing/installing_bare_metal_ipi/ipi-install-installation-workflow.html#configuring-the-install-config-file_ipi-install-installation-workflow
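For reference, a minimal bare metal IPI `install-config.yaml` looks roughly like the sketch below. Every value shown (domain, cluster name, network CIDR, VIPs, NIC name, BMC addresses/credentials, MACs) is a placeholder; use the values for your RHPDS environment and treat the linked docs as the authority.
```
apiVersion: v1
baseDomain: dynamic.opentlc.com        # placeholder; use your lab domain
metadata:
  name: ocp                            # placeholder cluster name
networking:
  machineNetwork:
  - cidr: 10.20.0.0/24                 # placeholder lab network
compute:
- name: worker
  replicas: 2
controlPlane:
  name: master
  replicas: 3
platform:
  baremetal:
    apiVIP: 10.20.0.110                # placeholder VIPs
    ingressVIP: 10.20.0.112
    provisioningNetworkInterface: ens3 # placeholder NIC
    hosts:
    - name: master-0
      role: master
      bmc:
        address: ipmi://10.20.0.3      # placeholder BMC endpoint
        username: admin                # placeholder credentials
        password: changeme
      bootMACAddress: 52:54:00:00:00:01  # placeholder MAC
pullSecret: '<pull-secret-json>'
sshKey: '<ssh-public-key>'
```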
### Download the qubinode installer
```
git clone https://github.com/tosin2013/qubinode-installer.git
cd qubinode-installer
```
### Update the all.yml to work on RHPDS
```
vi samples/all.yml
#####
# RHPDS Settings
run_on_rhpds: yes
run_kni_lab_on_rhpds: yes
```
### Configure Ansible
> When prompted during setup, enter the lab DNS server instead of the default:
>
> `Enter an upstream DNS server or press [ENTER] for the default [1.1.1.1]: 10.20.0.2`
```
./qubinode-installer -m setup
./qubinode-installer -m rhsm
./qubinode-installer -m ansible
```
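A quick way to confirm the setup completed is to check that Ansible is installed and can reach localhost (exact version output will vary):
```
ansible --version
ansible localhost -m ping
```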
### Configure the machine to work with qubinode
> This can be automated in the future if needed
```
vi playbooks/vars/kvm_host.yml
# should a bridge interface be created
configure_bridge: false
# Set to no prevent the installer from attempting
# setup a LVM group for qubinode. Also set this to no
# if you already have you storage for lvm setup
create_lvm: no
./qubinode-installer -m host
```
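If you set `create_lvm: no` because storage is already in place, it helps to sanity-check the existing LVM layout and interfaces first (read-only checks, not part of the installer):
```
sudo lsblk            # block devices and mount points
sudo vgs && sudo lvs  # existing LVM volume groups / logical volumes
ip -br addr show      # interfaces, in case you also skip the bridge
```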
### You may now access Cockpit
* https://provision.g2tgb.dynamic.opentlc.com:9090/ (replace `g2tgb` with your GUID)
### Configure the Latest OpenShift Client Version
```
./qubinode-installer -p ipilab -m configure_latest_ocp_client
```
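Afterwards the `oc` client should be on your PATH; a quick sanity check (version output depends on what the step downloaded):
```
which oc
oc version --client
```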
### Install Required Packages
```
./qubinode-installer -p ipilab -m install_packages
```
### Configure Disconnected Registry
```
./qubinode-installer -p ipilab -m configure_disconnected_repo
```
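You can confirm the disconnected registry is answering by querying its v2 catalog endpoint. The hostname, port, and credentials below are placeholders for whatever this step configured in your environment:
```
# host, port, and credentials are placeholders; adjust to your setup
curl -k -u <registry-user>:<registry-password> \
  https://provision.<GUID>.dynamic.opentlc.com:5000/v2/_catalog
```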
### Configure ironic metal3 pod
```
./qubinode-installer -p ipilab -m configure_ironic_pod
```
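To confirm the ironic containers came up on the provisioning host, list the running podman containers (the exact container names depend on what this step launched):
```
sudo podman ps --format "{{.Names}}\t{{.Status}}"
```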
### Configure pull secrets and certs
```
./qubinode-installer -p ipilab -m configure_pull_secret_and_certs
```
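If you want to verify the pull secret is valid JSON before it is used, `jq` will refuse to parse a broken file. The path below is an assumption; adjust it to wherever this step wrote the merged secret:
```
# path is an assumption; point this at your merged pull secret
jq . ~/pull-secret.json
```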
### Mirror Registry
```
./qubinode-installer -p ipilab -m mirror_registry
```
### Download OCP images
```
./qubinode-installer -p ipilab -m download_ocp_images
```
### Shutdown Hosts
> **NOTE:** These commands may fail (`Unable to set Chassis Power Control to Down/Off`) if the load on the underlying infrastructure is too high. If this happens, simply re-run the step until it succeeds for all nodes.
```
./qubinode-installer -p ipilab -m shutdown_hosts
```
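If the step keeps failing, you can poll an individual node's power state directly with `ipmitool` before re-running. The BMC address and credentials below are placeholders for your environment:
```
# BMC address and credentials are placeholders
ipmitool -I lanplus -H 10.20.0.3 -U admin -P <password> chassis power status
```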
### Create OpenShift Cluster
[Creating an OpenShift Cluster](https://github.com/RHFieldProductManagement/baremetal-ipi-lab/blob/master/04-deploying-cluster.md)
```
$ tmux new-session -s install-openshift
$ ./qubinode-installer -p ipilab -m start_openshift_installation
```
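The installation runs for quite a while; detach from the tmux session with `Ctrl-b d` and reattach later to check on it. The log path below is an assumption about where the installer writes its working files:
```
tmux attach -t install-openshift
# log path is an assumption; adjust to the installer's working directory
tail -f ~/clusterconfigs/.openshift_install.log
```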
### Destroy Cluster
```
./qubinode-installer -p ipilab -m destroy_openshift_installation
./qubinode-installer -p ipilab -m shutdown_hosts
./qubinode-installer -p ipilab -m cleanup_deploy
```
### Troubleshooting
**Watch VM console**
```
$ ssh lab-user@hypervisor
$ sudo virsh list --all
$ sudo virsh console lhwfc-kpgdk-bootstrap
```
**Watch bootstrap logs**
```
$ ssh core@10.20.0.115
$ journalctl -b -f -u release-image.service -u bootkube.service
```
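You can also wait on the bootstrap phase with the installer itself (the working directory below is an assumption):
```
# --dir is an assumption; use the directory the installer was run from
openshift-install --dir ~/clusterconfigs wait-for bootstrap-complete --log-level debug
```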
**Watch boot info of instance**
```
openstack --os-cloud=$OSP_PROJECT console url show master-2
```

**If you are unable to SSH into the bastion host**
* Open a ticket with support; this is a DNS issue that occurs when the cluster automatically shuts down.
* Restart the podman pods (see the sketch after the GUID export below).
* If you are re-running steps, set the GUID first:
```
export GUID=lhwfc
```
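A blunt way to bounce all running podman containers on the host (a sketch; restart individual containers instead if you know which one is unhealthy):
```
sudo podman restart $(sudo podman ps -q)
```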

### TESTING
### Server health check
```
openstack --os-cloud=$OSP_PROJECT server list
openstack --os-cloud=$OSP_PROJECT volume list
```
```
# create a 120 GB volume, attach it to master-2 as /dev/vdb, then resume the server
openstack --os-cloud=$OSP_PROJECT volume create --size 120 master-2-volume
openstack --os-cloud=$OSP_PROJECT server add volume --device /dev/vdb master-2 master-2-volume
openstack --os-cloud=$OSP_PROJECT server resume master-2
```
### Add volumes
```
openstack --os-cloud=$OSP_PROJECT server add volume --device /dev/vdb master-0 master-0-volume
openstack --os-cloud=$OSP_PROJECT server add volume --device /dev/vdb master-1 master-1-volume
openstack --os-cloud=$OSP_PROJECT server add volume --device /dev/vdb master-2 master-2-volume
```
**Recover a server from rescue state**
```
# find the OSP_PROJECT value used by the install script
cat ~/scripts/ocp-install.sh | grep OSP_PROJECT | head -1
openstack --os-cloud=$OSP_PROJECT server list
# take master-0 out of rescue mode, then stop and start it
openstack --os-cloud=$OSP_PROJECT server unrescue master-0
openstack --os-cloud=$OSP_PROJECT server stop master-0
openstack --os-cloud=$OSP_PROJECT server list
openstack --os-cloud=$OSP_PROJECT server start master-0
```