### Baremetal Hardware
We have some interconnected hardware that we use for traditional TripleO deployments.
:::warning
Really old infra - the warranty expired 6 years ago.
:::
### Current status of baremetal servers (OSP release / RHEL major version)
* EnvA (Dell) - 17.1/9
* EnvB (Dell) - 17.0/9 (removed this week after a hardware failure on one node)
* EnvC - decommissioned; received new baremetal hardware that is not used anywhere and not interconnected.
* EnvD (HP) - 16.2/8
* EnvE (HP) - not used anywhere (the environment is in good shape; no hardware issues as of the last time we ran it)
## Documentation about baremetal jobs
https://docs.openstack.org/tripleo-docs/latest/ci/baremetal_jobs.html

# Network Topology of our infra

# Where to find actual vlan/network info?
- https://code.engineering.redhat.com/gerrit/plugins/gitiles/tripleo-environments/+/refs/heads/master/hardware_environments/
- Separate directory for each baremetal env
- https://code.engineering.redhat.com/gerrit/plugins/gitiles/tripleo-environments/+/refs/heads/master/hardware_environments/dell_fc430_envA/network_configs/single_nic_vlans/env_settings.yml
# Job definition
* Parent job: https://code.engineering.redhat.com/gerrit/gitweb?p=openstack/tripleo-ci-internal-jobs.git;a=blob;f=zuul.d/baremetal-jobs.yaml;h=4e64fdede531b9f378b96b5d5cdcfd767381cae2;hb=HEAD#l2
* https://code.engineering.redhat.com/gerrit/gitweb?p=openstack/tripleo-ci-internal-jobs.git;a=blob;f=zuul.d/baremetal-jobs.yaml;h=4e64fdede531b9f378b96b5d5cdcfd767381cae2;hb=HEAD#l138
We use a Zuul `semaphore`: semaphores restrict how many jobs of a given kind can run at the same time.
Ref: https://zuul-ci.org/docs/zuul/latest/config/semaphore.html
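As an illustration, a semaphore definition and a job using it look roughly like this (names here are hypothetical; the real definitions live in the `zuul.d/baremetal-jobs.yaml` file linked above):

~~~yaml
# Hypothetical sketch - a semaphore limiting EnvA to one running job at a time
- semaphore:
    name: baremetal-enva
    max: 1

# A job claims the semaphore via its `semaphore` attribute
- job:
    name: periodic-baremetal-enva-job
    semaphore: baremetal-enva
~~~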
# How to access environment?
The environment remains accessible even after a job finishes.
Key: https://code.engineering.redhat.com/gerrit/plugins/gitiles/tripleo-environments/+/refs/heads/master/keys/beaker/rdoci-jenkins
EnvA:
`ssh -i ~/.ssh/id_rsa_rdoci-jenkins root@rdo-ci-fx2-04-s8.v102.rdoci.lab.eng.rdu2.redhat.com`
EnvD:
`ssh -i ~/.ssh/id_rsa_rdoci-jenkins root@rdoci-hp-01.v100.rdoci.lab.eng.rdu2.redhat.com`
The `undercloud` alias will take you to the undercloud VM:
~~~
[root@rdoci-hp-01 ~]# undercloud
Activate the web console with: systemctl enable --now cockpit.socket
Register this system with Red Hat Insights: insights-client --register
Create an account or view all your systems at https://red.ht/insights-dashboard
Last login: Thu Sep 28 05:09:59 2023 from 192.168.23.1
[zuul@undercloud ~]$
~~~
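For reference, the alias is essentially an ssh hop from the KVM host into the undercloud VM. A minimal sketch of what it might look like (the actual definition lives on the KVM host; the address below is a placeholder, not the real one):

~~~
# Hypothetical sketch - the real alias and VM address are defined on the KVM host
alias undercloud='ssh zuul@<undercloud-ip>'
~~~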
# Reprovisioning the KVM host (only needed when something went wrong or you want to move to a different RHEL version)
If you ever need to reprovision a KVM box, you can do so using PXE from the lab PXE server or by virtually mounting a DVD ISO.
After reprovisioning, you will need the following on the blank KVM box:
~~~
1. Copy jenkins key
$ ssh-copy-id -i ~/.ssh/id_rsa_rdoci-jenkins root@rdo-ci-fx2-04-s8.v102.rdoci.lab.eng.rdu2.redhat.com
2. Install repos
dnf install -y http://download.lab.bos.redhat.com/rcm-guest/puddles/OpenStack/rhos-release/rhos-release-latest.noarch.rpm
rhos-release -u -r 9.2
3. Install cert
dnf install -y http://hdn.corp.redhat.com/rhel8-csb/RPMS/noarch/redhat-internal-cert-install-0.1-31.el7.noarch.rpm
4. Install libvirt and update
dnf update -y
dnf install libvirt -y
reboot if the kernel was updated
5. Add following ports in firewall and reload firewall
firewall-cmd --add-service tftp --zone=public --permanent
firewall-cmd --add-service dhcp --zone=public --permanent
firewall-cmd --add-service dns --zone=public --permanent
firewall-cmd --add-service dhcpv6 --zone=public --permanent
firewall-cmd --zone=libvirt --add-port=69/tcp --permanent
firewall-cmd --zone=libvirt --add-port=69/udp --permanent
firewall-cmd --zone=libvirt --add-port=67/udp --permanent
firewall-cmd --zone=libvirt --add-port=67/tcp --permanent
firewall-cmd --zone=libvirt --add-port=68/tcp --permanent
firewall-cmd --zone=libvirt --add-port=68/udp --permanent
firewall-cmd --reload
6. Troubleshooting - if you hit introspection/provisioning issues after reprovisioning
Check whether eno2 is connected to the brovc bridge. I hit this issue once or twice and fixed it by attaching eno2 to the brovc bridge using the RHEL console GUI.
~~~
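For the bridge issue in step 6, the check and fix can also be sketched from the command line (this assumes NetworkManager manages the host; `eno2` and `brovc` are the interface/bridge names from step 6):

~~~
# Check whether eno2 is attached to the brovc bridge
ip link show master brovc | grep eno2 || echo "eno2 is not in brovc"
# One way to attach it without the console GUI (NetworkManager assumed);
# nmcli auto-names the new connection "bridge-slave-eno2"
nmcli connection add type bridge-slave ifname eno2 master brovc
nmcli connection up bridge-slave-eno2
~~~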
# Some ideas on how we can utilize the older hardware (EnvB/EnvE)
