# vSphere RHEL-8.6 Base OS from ISO
## vSphere Prerequisites
## Bastion Node Creation (vSphere VM)
**1. Download RHEL-8.6 boot ISO to local host**
```
https://access.redhat.com/downloads/content/479/ver=/rhel---8/8.6/x86_64/product-software
```
You will need to authenticate with your Red Hat Developer credentials. Make sure to keep them handy as you will need them later.
**2. Copy the ISO to a datastore that is accessible by all ESXi nodes in the cluster**
**3. Create a "New Virtual Machine":**
From the vSphere web UI, right-click the cluster you plan to use for DKP, select "New Virtual Machine", and configure:
* Name: rhel-86vm
* Select the "Datacenter" you want to deploy DKP on
* Select the "vCenter Cluster" you want to deploy DKP on
* Select the "Datastore" you plan to use for DKP
* Select "ESXi 7 and later" compatibility
* Select "OS family" Linux
* Select "OS Version" Red Hat Enterprise Linux 8 (64-bit)
* Hardware Configuration
  * CPU: 4
  * Memory: 16GB
  * New Hard disk: 80GB
  * New CD/DVD Drive: Datastore ISO file
    * Select rhel-8.6-x86_64-boot.iso
    * Check the "Connect" box for the CD/DVD drive
**4. Once the VM boots from the ISO, open a virtual console and configure the following items in the installer:**
* Manual drive partitioning
  * Remove /home
  * Delete swap and add the freed space to / (should be ~119GB)
* Set the time zone
* Add the RHEL subscription
* Select minimal install without any additional packages
* Select automatic configuration for the network IP (DHCP)
* Start the install
* Create a new admin user "builder" with password "builder"
* Allow the install to finish, then reboot
* Allow passwordless sudo for the wheel group (a non-interactive alternative is sketched below)
  * sudo vi /etc/sudoers
  * Uncomment the line `# %wheel ALL=(ALL) NOPASSWD: ALL`
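As an alternative to editing /etc/sudoers by hand, a minimal non-interactive sketch (run as root on the new VM; assumes the "builder" user created above) is:
```
# Equivalent to uncommenting the NOPASSWD wheel line in /etc/sudoers,
# but done with a sudoers.d drop-in file instead.
echo '%wheel ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/99-wheel-nopasswd
chmod 0440 /etc/sudoers.d/99-wheel-nopasswd
# Make sure "builder" is in the wheel group (the installer's admin checkbox normally does this).
usermod -aG wheel builder
```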
**5. Once the base OS VM boots, copy over the SSH key you plan to use for the install and KIB**
```
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<vm-ip>
```
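If the host you are copying from does not already have a key pair, a minimal sketch (assuming a passphrase-less RSA key at root's default location) is:
```
# Assumption: only generate a new key if /root/.ssh/id_rsa does not already exist.
[ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa -N ""
```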
**6. Power off the VM and take a snapshot**
* Right-click on the base OS VM
* Select Snapshots
* Then take a snapshot and name it "root"
## Create OVA Template for Cluster Nodes
**7. Clone the Base-RHEL-OS to a new VM**
* Right-click on the base OS VM and select "Clone to Virtual Machine"
* Select 4 vCPUs, 8GB RAM, and a 150GB vdisk
* Make sure to check and adjust the MAC address so there isn't a conflict
**8. Once the VM boots, open an SSH terminal to the host and install the tools and packages**
`sudo yum install -y yum-utils bzip2`
**9. Install Docker (Only on Bastion Host)**
Add the repo for upstream docker...
```
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
```
...and install Docker
```
sudo yum install -y docker-ce docker-ce-cli containerd.io
```
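Docker is used later on this host to run Konvoy Image Builder and the DKP bootstrap cluster, so start it and (optionally) allow non-root use. A short sketch, assuming the "builder" user created during the OS install:
```
# Start Docker now and enable it on boot
sudo systemctl enable --now docker
# Optional: let "builder" run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker builder
sudo docker version
```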
**10. Create directories for Konvoy Image Builder and the DKP CLI**
```
mkdir kib && mkdir dkp
```
**11. Get the needed D2iQ Software**
Download and decompress Konvoy Image Builder...
```
cd kib
wget https://github.com/mesosphere/konvoy-image-builder/releases/download/v1.24.2/konvoy-image-bundle-v1.24.2_linux_amd64.tar.gz
tar -xvf konvoy-image-bundle-v1.24.2_linux_amd64.tar.gz
```
...and DKP CLI
```
cd ..
cd dkp
wget https://downloads.d2iq.com/dkp/v2.4.0/dkp_v2.4.0_linux_amd64.tar.gz
tar -xvf dkp_v2.4.0_linux_amd64.tar.gz
```
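A quick sanity check that both bundles extracted correctly (assuming kib/ and dkp/ were created under your home directory):
```
# konvoy-image should print its usage; dkp should report v2.4.0
~/kib/konvoy-image --help
~/dkp/dkp version
```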
**12. Create a folder and resource pool in vCenter for the DKP cluster**
Folder:
* Right-click on the datacenter
* Select "New Folder" > "New Host and Cluster Folder"
* Name the folder "D2IQ"
Resource Pool:
* Right-click on the vCenter cluster you plan to use for DKP
* Select "New Resource Pool" (the packer file below uses the name "D2IQ")
* Adjust values if you need to restrict resources for DKP
**13. Open the Konvoy Image Builder image definition for RHEL 8.6**
```
cd ..
cd kib/images/ova
vi rhel-86.yaml
```
**14. Adjust the packer file for your vSphere cluster**
```
---
download_images: true
build_name: "vsphere-rhel-86"
packer_builder_type: "vsphere"
guestinfo_datasource_slug: "https://raw.githubusercontent.com/vmware/cloud-init-vmware-guestinfo"
guestinfo_datasource_ref: "v1.4.0"
guestinfo_datasource_script: "{{guestinfo_datasource_slug}}/{{guestinfo_datasource_ref}}/install.sh"
packer:
  vcenter_server: "10.0.1.52"
  vsphere_username: "administrator@vsphere.local"
  vsphere_password: "Password"
  cluster: "cluster1"
  datacenter: "dc1"
  datastore: "vsanDatastore"
  folder: "d2iq"
  insecure_connection: "true"
  network: "VM Network"
  resource_pool: "D2IQ"
  template: "rhel86vm"
  vsphere_guest_os_type: "rhel8_64Guest"
  guest_os_type: "rhel8-64"
  # goss params
  distribution: "RHEL"
  distribution_version: "8.6"
```
**15. Create overrides for Docker credentials**
Navigate to the directory with konvoy-image...
```
cd ..
cd ..
```
...and create the overrides file
```
vi overrides.yaml
image_registries_with_auth:
  - host: "registry-1.docker.io"
    username: "dkptestdrive"
    password: "43bfb7f5-de67-4fb0-a9a7-32dd3bdcc2a6"
    auth: ""
    identityToken: ""
```
**16. Build the OVA template using Konvoy Image Builder**
```
./konvoy-image build images/ova/rhel-86.yaml --overrides overrides.yaml
```
## Create DKP cluster on vSphere
**17. Export your vSphere Environment Variables**
Copy the set of "exports" below into a text document so you can modify them and paste them into the terminal. Save it for later reference or reuse.
```
export VSPHERE_SERVER="<vcenter-server-ip-address>"
export VSPHERE_PASSWORD='<password>'
export VSPHERE_USERNAME="<administrator@vsphere.local>"
export VSPHERE_FOLDER="D2IQ"
export DKP_CLUSTER_NAME=dkp
# Print the vCenter certificate's SHA-1 thumbprint (needed later if vCenter uses a self-signed certificate)
openssl s_client -connect "${VSPHERE_SERVER}:443" < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
```
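If you want the thumbprint in a variable for the `--tls-thumb-print` flag used later, a small sketch (assuming VSPHERE_SERVER is already exported and vCenter presents a SHA-1 fingerprint) is:
```
# Strip the "SHA1 Fingerprint=" prefix and keep only the colon-separated value
export VSPHERE_TLS_THUMBPRINT=$(openssl s_client -connect "${VSPHERE_SERVER}:443" < /dev/null 2>/dev/null \
  | openssl x509 -fingerprint -sha1 -noout | cut -d '=' -f 2)
echo "${VSPHERE_TLS_THUMBPRINT}"
```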
**18. Build the Bootstrap Cluster**
```
cd ..
cd dkp
./dkp create bootstrap
```
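Before moving on, you can confirm the bootstrap KIND cluster is up. A minimal check, assuming the bootstrap cluster's context is now the active kubectl context on the bastion:
```
# The Cluster API controller pods (capi-*, capv-*, cert-manager) should reach Running
kubectl get pods -A
```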
**19. Create the DKP cluster deployment YAML**
If you are not using a self-signed certificate, remove the `--tls-thumb-print=` flag from the command below.
```
./dkp create cluster vsphere \
  --cluster-name=${DKP_CLUSTER_NAME} \
  --network="VM Network" \
  --control-plane-endpoint-host="<vip_for_api>" \
  --virtual-ip-interface="eth0" \
  --data-center="<dc1>" \
  --data-store="vsanDatastore" \
  --folder="${VSPHERE_FOLDER}" \
  --server="<vsphere_server_ip>" \
  --ssh-public-key-file=/root/.ssh/id_rsa.pub \
  --resource-pool="DKP" \
  --vm-template=konvoy-ova-vsphere-rhel-84-1.22.8-1649344885 \
  --tls-thumb-print="<tls-thumbprint>" \
  --dry-run -o yaml > ${DKP_CLUSTER_NAME}.yaml
```
**20. Adjust the template sizes for the control plane and worker pool in the cluster YAML**
```
vi ${DKP_CLUSTER_NAME}.yaml
```
Search for `numCPUs` (type `/numCPUs` in vi). The examples below come from a different environment (cluster icekube2, datacenter kompton); the names in your YAML will match your cluster. Adjust the control-plane VSphereMachineTemplate to 4 CPUs and 16GB of RAM:
```
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: icekube2-control-plane
  namespace: default
spec:
  template:
    spec:
      cloneMode: linkedClone
      datacenter: kompton
      datastore: vsanDatastore
      diskGiB: 120
      folder: d2iq
      memoryMiB: 16384
      network:
        devices:
        - dhcp4: true
          networkName: VM Network
      numCPUs: 4
      resourcePool: DKP
      server: 10.0.1.52
      template: konvoy-ova-vsphere-rhel-84-1.22.8-1649344885
      thumbprint: B1:79:49:36:BF:2A:FE:93:0E:ED:BD:3D:B6:DD:B1:E9:36:51:03:63
---
```
Adjust the worker VSphereMachineTemplate (the `-md-0` template) to 8 CPUs and 32GB of RAM:
```
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: icekube2-md-0
  namespace: default
spec:
  template:
    spec:
      cloneMode: linkedClone
      datacenter: kompton
      datastore: vsanDatastore
      diskGiB: 120
      folder: d2iq
      memoryMiB: 32768
      network:
        devices:
        - dhcp4: true
          networkName: VM Network
      numCPUs: 8
      resourcePool: DKP
      server: 10.0.1.52
      template: konvoy-ova-vsphere-rhel-84-1.22.8-1649344885
      thumbprint: B1:79:49:36:BF:2A:FE:93:0E:ED:BD:3D:B6:DD:B1:E9:36:51:03:63
---
```
**21. Deploy the cluster**
```
kubectl create -f ${DKP_CLUSTER_NAME}.yaml
```
**22. Watch Your Cluster Build**
```
dkp describe cluster -c ${DKP_CLUSTER_NAME}
```
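Another useful view while waiting, assuming the bootstrap cluster is still the active kubectl context (the Cluster API objects live there until the pivot in a later step):
```
# Watch machines move from Provisioning to Running
kubectl get machines -w
```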
**23. Extract the kubeconfig for the new cluster**
```
dkp get kubeconfig -c ${DKP_CLUSTER_NAME} > ${DKP_CLUSTER_NAME}.conf
```
**24. Deploy the MetalLB Configuration**
```
cat <<EOF > metallb.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.50-10.0.0.51
EOF
kubectl --kubeconfig ${DKP_CLUSTER_NAME}.conf apply -f metallb.yaml
```
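To confirm MetalLB picks up the pool, check that any LoadBalancer services on the cluster receive an external IP from the 10.0.0.50-10.0.0.51 range:
```
kubectl --kubeconfig ${DKP_CLUSTER_NAME}.conf get svc -A | grep LoadBalancer
```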
**25. Pivot the Cluster Controllers**
Create CAPI Controllers on Cluster...
```
./dkp create capi-components --kubeconfig ${DKP_CLUSTER_NAME}.conf
```
...and move the configuration to the cluster
```
./dkp move --to-kubeconfig ${DKP_CLUSTER_NAME}.conf
```
You now have a self-managing Kubernetes cluster deployed on vSphere.
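A quick way to confirm the pivot worked, assuming the move completed without errors, is to check that the Cluster API objects now exist on the workload cluster itself:
```
# These resources were previously only visible from the bootstrap cluster
kubectl --kubeconfig ${DKP_CLUSTER_NAME}.conf get clusters,machines
```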
**26. Adjust the storage class to only allow PVs on a specific VMware datastore**
```
kubectl delete sc vsphere-raw-block-sc --kubeconfig ${DKP_CLUSTER_NAME}.conf
```
Create a storage class yaml with the URL of the VMware datastore you want to use
```
vi sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: vsphere-raw-block-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
parameters:
  datastoreurl: "ds:///vmfs/volumes/vsan:5238a205736fdb1f-c71f7ec7a0353662/"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
Apply the Storage class yaml to create a new default SC
```
kubectl apply -f sc.yaml --kubeconfig ${DKP_CLUSTER_NAME}.conf
```
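Verify the new class exists and is marked as the default:
```
# vsphere-raw-block-sc should be listed with "(default)" next to its name
kubectl --kubeconfig ${DKP_CLUSTER_NAME}.conf get sc
```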
## Kommander Deployment
**27. Deploy Kommander to the DKP Cluster**
```
./dkp install kommander --kubeconfig ${DKP_CLUSTER_NAME}.conf
```
**28. Watch the Helm Releases Deploy**
```
watch kubectl get hr -A --kubeconfig ${DKP_CLUSTER_NAME}.conf
```
Once the Helm releases are ready, open the DKP dashboard:
```
dkp open dashboard
```