# Jio Redhat Openstack on Openshift
The goal of the project is to enable deployment of Red Hat OpenStack Platform (RHOSP) on the Jio Indradhanus Cloud (a customized Red Hat OpenShift).
## Goal
We need to deliver RHOSP on Jio-RHOCP (referred to as `work`) by 10th December.
__Deadline__: 10th December to realize the deployment in Jio Cloud Foundry.
## Redhat Expectations
Red Hat needs to work independently of Jio and build the prerequisites for the said deployment work.
@2Ip11oY1QEqczIfe76wb5A please build up the details.
### RHOSP installation and prerequisites
1. An OpenShift environment of version 4.6 or later is needed, hosted on bare metal.
2. To install the RHOSP director Operator, the OCP cluster must be installed with installer-provisioned infrastructure (IPI) or assisted installation (AI); such clusters use the "baremetal" platform type and have the "baremetal" cluster Operator enabled. OCP clusters installed with user-provisioned infrastructure (UPI) use the "none" platform type and might have the baremetal cluster Operator disabled.
3. To check if the baremetal cluster Operator is enabled, navigate to Administration > Cluster Settings > ClusterOperators > baremetal, scroll to the "Conditions" section, and view the "Disabled" status.
4. To check the platform type of the OCP cluster, navigate to Administration > Global Configuration > Infrastructure, switch to "YAML" view, scroll to the "Conditions" section, and view the "status.platformStatus" value.
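NOTE: The two checks above can also be done from the CLI. A minimal sketch, assuming `oc` access with cluster-admin rights:
```
# Check whether the "baremetal" cluster Operator is disabled (look at the Disabled condition).
oc get clusteroperator baremetal -o jsonpath='{.status.conditions[?(@.type=="Disabled")].status}{"\n"}'
# Check the platform type; IPI/AI bare-metal clusters report "BareMetal".
oc get infrastructure cluster -o jsonpath='{.status.platformStatus.type}{"\n"}'
```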
5. The setup must have the OpenShift Virtualization 4.10 and SR-IOV Network Operator 4.10 Operators installed. The following is the generic process to install an Operator from the OpenShift GUI; a CLI verification sketch follows the list.
* In the OpenShift Container Platform web console, click Operators → OperatorHub.
* Select SR-IOV Network/Virtualization Operator from the list of available Operators, and then click Install.
* On the Install Operator page, under Installed Namespace, select Operator recommended Namespace.
* Click Install.
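NOTE: To confirm both Operators were installed successfully, a quick CLI check (this assumes they were installed into their recommended namespaces, `openshift-cnv` and `openshift-sriov-network-operator`):
```
# List the installed ClusterServiceVersions; both should show PHASE "Succeeded".
oc get csv -n openshift-cnv
oc get csv -n openshift-sriov-network-operator
```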
6. Install `opm` by downloading the required package from the [opm package](https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest-4.10/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC) mirror and running the commands below.
```
tar xvf <opm archive filename>
# NOTE: to copy the file to a debug node, use: oc cp <filename> <debug pod name>:/tmp
echo $PATH
sudo mv ./opm /usr/local/bin
opm version
```
7. Configure a Git repository for the director Operator to store the overcloud configuration, as in the sketch below.
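A minimal sketch of the Git access secret, assuming the secret name `git-secret` that is referenced later in the OpenStackControlPlane and OpenStackConfigGenerator specs; the data-key names follow the upstream director Operator examples and should be verified against the operator docs:
```
# Hypothetical paths/URL; the key names git_ssh_identity and git_url are assumptions.
oc create secret generic git-secret -n openstack \
  --from-file=git_ssh_identity=<path to private SSH key for the repo> \
  --from-literal=git_url=<SSH clone URL of the overcloud config repo>
```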
8. Create persistent volumes to fulfill the following persistent volume claims that the director Operator creates: 4 GB for "openstackclient-cloud-admin", 1 GB for "openstackclient-hosts", and 50 GB for the base image that the director Operator clones for each Controller virtual machine. An illustrative example follows.
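An illustrative sketch of one such persistent volume, assuming an NFS backend with placeholder server/path values; any storage backend that can satisfy the claims works equally well:
```
# Example only: a 4Gi PV for the "openstackclient-cloud-admin" claim.
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openstackclient-cloud-admin-pv
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs-server-ip>
    path: /exports/openstackclient-cloud-admin
EOF
```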
9. Download a Red Hat Enterprise Linux 8 QCOW2 image from the [Red Hat image portal](https://access.redhat.com/downloads).
10. Install the "virtctl" and "virt-customize" client tools on the workstation. On a Red Hat Enterprise Linux workstation they can be installed using the commands below (a note on the virt-customize package follows the block).
```
oc get consoleclidownload
oc describe consoleclidownload virtctl-clidownloads-kubevirt-hyperconverged
# choose the required link to download the package (Linux/Mac/Windows)
tar -xvf virtctl.tar.gz
echo $PATH
mv virtctl /usr/local/bin
```
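NOTE: The commands above install `virtctl` only. `virt-customize` comes from the guest image tooling; a minimal sketch, assuming a RHEL 8 workstation (the exact package name may differ between releases):
```
# Assumption: libguestfs-tools-c provides virt-customize on RHEL 8.
sudo dnf install -y libguestfs-tools-c
virt-customize --version
```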
## Design/Architecture

## Steps for Deployment
1. Create a namespace "openstack".
```
oc new-project openstack
```
2. Create an index image and push it to the registry.
```
BUNDLE_IMG="registry.redhat.io/rhosp-rhel8/osp-director-operator-bundle:1.3.0-8"
INDEX_IMG="devopsartifact.jio.com/indradhanus__dev__dcr/rhosp-director/osp-director-operator-index:v1.0"
opm index add --bundles ${BUNDLE_IMG} --tag ${INDEX_IMG} -u podman --pull-tool podman
podman push ${INDEX_IMG}
```
3. Create a file named "osp-director-operator.yaml" and include the YAML content below.
```
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: osp-director-operator-index
  namespace: openstack
spec:
  sourceType: grpc
  image: devopsartifact.jio.com/indradhanus__dev__dcr/rhosp-director/osp-director-operator-index:v1.0
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: "osp-director-operator-group"
  namespace: openstack
spec:
  targetNamespaces:
  - openstack
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: osp-director-operator-subscription
  namespace: openstack
spec:
  config:
    env:
    - name: WATCH_NAMESPACE
      value: openstack,openshift-machine-api,openshift-sriov-network-operator
  source: osp-director-operator-index
  sourceNamespace: openstack
  name: osp-director-operator
```
4. Apply the above YAML file in the "openstack" namespace.
```
oc apply -f osp-director-operator.yaml
```
5. Verify that the director Operator is installed successfully.
```
oc get operators | grep -i openstack
```
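Optionally, also confirm that the Subscription resolved to a ClusterServiceVersion and that the operator pod is running (generic checks, not specific to this operator):
```
oc get csv -n openstack
oc get pods -n openstack
```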
6. To view and describe the director Operator CRDs:
```
oc get crd | grep -i openstack
oc describe crd "<CRD name>"
```
7. Modify the default QCOW2 image with the "virt-customize" commands below, because this image does not have biosdev predictable network interface names. This image acts as the base operating system for the Controller virtual machines.
```
sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(GRUB_CMDLINE_LINUX=.*\)net.ifnames=0 \(.*\)/\1\2/" /etc/default/grub'
```
8. Upload the image to OpenShift Virtualization with `virtctl`.
```
virtctl image-upload dv openstack-base-img -n openstack --size=50Gi --image-path=/home/sarathreddy/rhel-isos/rhel-8.2-update-2-x86_64-kvm.qcow2 --storage-class ocs-storagecluster-cephfs --insecure
```
9. Setting the root password for the nodes is optional, as we can log in to the nodes with the SSH keys defined in the "osp-controlplane-ssh-keys" secret (a creation sketch for that secret follows at the end of this step).
* Convert the chosen password to base64 value.
```
echo -n "p@ssw0rd!" | base64
cEBzc3cwcmQh
```
* Create a file named "openstack-userpassword.yaml"
```
apiVersion: v1
kind: Secret
metadata:
  name: userpassword
  namespace: openstack
data:
  NodeRootPassword: "cEBzc3cwcmQh"
```
* Create the "userpassword" secret
```
oc create -f openstack-userpassword.yaml -n openstack
```
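NOTE: The "osp-controlplane-ssh-keys" secret mentioned above (and referenced later by `deploymentSSHSecret`) is not created elsewhere in this document. A minimal sketch, assuming the data-key names used in the upstream director Operator examples (verify against the operator docs):
```
# Generate a keypair and store it in the secret the director Operator consumes.
# The key names id_rsa, id_rsa.pub and authorized_keys are assumptions.
ssh-keygen -f ./id_rsa -t rsa -N ''
oc create secret generic osp-controlplane-ssh-keys -n openstack \
  --from-file=id_rsa=./id_rsa \
  --from-file=id_rsa.pub=./id_rsa.pub \
  --from-file=authorized_keys=./id_rsa.pub
```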
10. Create an overcloud control plane network with OpenStackNetConfig. Create a file named "osnetconfig.yaml" (referenced in the commands below) with the resource specification.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: ens1f1
            description: Linux bridge with enp6s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 192.168.25.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv6:
        allocationEnd: 2405:200:5f02:a392:100::112
        allocationStart: 2405:200:5f02:a392:100::110
        cidr: 2405:200:5f02:a392::0/64
        gateway: 2405:200:5f02:a392::1
      attachConfiguration: br-osp
  # optional: configure static mapping for the networks per nodes. If there is none, a random one gets created
  reservations:
    controller-0:
      ipReservations:
        ctlplane: 2405:200:5f02:a392:100::110
    compute-0:
      ipReservations:
        ctlplane: 2405:200:5f02:a392:100::111
    compute-1:
      ipReservations:
        ctlplane: 2405:200:5f02:a392:100::112
```
* Create and verify the control plane network using the commands below
```
oc create -f osnetconfig.yaml -n openstack
oc get openstacknetconfig openstacknetconfig
```
11. Create VLAN networks for network isolation with OpenStackNetConfig. Create a network configuration file (for example "openstacknetconfig.yaml", used in the commands below) that includes the resource specification for the VLAN networks. For example, the specification below defines the internal API, storage, storage management, tenant, and external networks that manage VLAN-tagged traffic over the Linux bridges "br-ex" and "br-osp", connected to the "enp6s0" and "enp7s0" Ethernet devices on each worker node.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: ens4f0
            description: Linux bridge with enp7s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
    br-ex:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: ens1f1
            description: Linux bridge with enp6s0 as a port
            name: br-ex
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 172.22.0.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv6:
        allocationEnd: 2405:200:5f02:a392:100::112
        allocationStart: 2405:200:5f02:a392:100::110
        cidr: 2405:200:5f02:a392::0/64
        gateway: 2405:200:5f02:a392::1
      attachConfiguration: br-osp
  - name: InternalApi
    nameLower: internal_api
    mtu: 1350
    subnets:
    - name: internal_api
      attachConfiguration: br-osp
      vlan: 20
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
  - name: External
    nameLower: external
    subnets:
    - name: external
      ipv4:
        allocationEnd: 10.0.0.250
        allocationStart: 10.0.0.10
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
      attachConfiguration: br-ex
  - name: Storage
    nameLower: storage
    mtu: 1500
    subnets:
    - name: storage
      ipv4:
        allocationEnd: 172.18.0.250
        allocationStart: 172.18.0.10
        cidr: 172.18.0.0/24
      vlan: 30
      attachConfiguration: br-osp
  - name: StorageMgmt
    nameLower: storage_mgmt
    mtu: 1500
    subnets:
    - name: storage_mgmt
      ipv4:
        allocationEnd: 172.19.0.250
        allocationStart: 172.19.0.10
        cidr: 172.19.0.0/24
      vlan: 40
      attachConfiguration: br-osp
  - name: Tenant
    nameLower: tenant
    vip: False
    mtu: 1500
    subnets:
    - name: tenant
      ipv4:
        allocationEnd: 172.20.0.250
        allocationStart: 172.20.0.10
        cidr: 172.20.0.0/24
      vlan: 50
      attachConfiguration: br-osp
```
* Save the file once the network specification is configured. Create the network configuration and verify it with the commands below.
```
oc create -f openstacknetconfig.yaml -n openstack
oc get openstacknetconfig/openstacknetconfig -n openstack
oc get openstacknetattach -n openstack
oc get openstacknet -n openstack
oc get nncp
```
12. Add custom templates to the overcloud configuration. Archive the custom templates into a tarball file so that these templates become part of the overcloud deployment.
* Navigate to the location of the custom templates (example below)
```
cd /home/sarathreddy/
vim environmentfile.yaml
```
* Update the sample environment file; these values need to be changed accordingly later
```
...
data:
  network_environment.yaml: |+
    resource_registry:
      OS::TripleO::Compute::Net::SoftwareConfig: net-config-static-bridge-compute.yaml
  cloud_name.yaml: |+
    parameter_defaults:
      CloudDomain: ocp4.example.com
      CloudName: overcloud.ocp4.example.com
      CloudNameInternal: overcloud.internalapi.ocp4.example.com
      CloudNameStorage: overcloud.storage.ocp4.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com
      CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com
```
* Archive the template into a tarball
```
tar -cvzf custom-config.tar.gz environmentfile.yaml
```
* Create the "tripleo-tarball-config" ConfigMap and use the tarball as data
```
oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
```
* Verify the ConfigMap of the custom configuration
```
oc get configmap/tripleo-tarball-config -n openstack
```
13. Add the custom environment files to the overcloud configuration.
NOTE: Create custom environment files for the overcloud deployment
* Create the "heat-env-config" ConfigMap and use the directory that contains the environment files as data
```
oc create configmap -n openstack heat-env-config --from-file=~/custom_environment_files/ --dry-run=client -o yaml | oc apply -f -
```
* View the ConfigMap with the command below
```
oc get configmap/heat-env-config -n openstack
```
14. Create a control plane with OpenStackControlPlane, using the OpenStackNetConfig resource to create the control plane network and any additional isolated networks. Create a file named "openstack-controller.yaml" (used in the commands that follow) with the resource specification below.
```
apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  gitSecret: git-secret
  openStackClientNetworks:
  - ctlplane
  - internal_api
  - external
  openStackClientStorageClass: host-nfs-storageclass
  passwordSecret: userpassword
  caConfigMap: tripleo-tarball-config
  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 1
      networks:
      - ctlplane
      - internal_api
      - external
      - tenant
      - storage
      - storage_mgmt
      cores: 12
      memory: 64
      rootDisk:
        diskSize: 100
        baseImageVolumeName: controller-base-img
        # storageClass must support RWX to be able to live migrate VMs
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        # When using OpenShift Virtualization with OpenShift Container Platform Container Storage,
        # specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks.
        # With virtual machine disks, RBD block mode volumes are more efficient and provide better
        # performance than Ceph FS or RBD filesystem-mode PVCs.
        # To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and
        # VolumeMode: Block.
        storageVolumeMode: Block
      # optional configure additional discs to be attached to the VMs,
      # need to be configured manually inside the VMs where to be used.
      additionalDisks:
      - name: datadisk
        diskSize: 500
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        storageVolumeMode: Block
  openStackRelease: "16.2"
```
* Create and verify the control plane
```
oc create -f openstack-controller.yaml -n openstack
oc get openstackcontrolplane/overcloud -n openstack
```
* View the openstackvmsets and virtualmachines with the commands below
```
oc get openstackvmsets -n openstack
oc get virtualmachines
```
* Test access to the "openstackclient" remote shell with the command below
```
oc rsh -n openstack openstackclient
```
15. Create a provisioning server with OpenStackProvisionServer. The provisioning server provides a specific RHEL QCOW2 image for provisioning the Compute nodes for RHOSP.
* Create a file named "openstack-provision-server.yaml" and include the resource specification for the provisioning server. For example, the specification below is for a provisioning server that uses a specific RHEL 8.4 QCOW2 image.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackProvisionServer
metadata:
  name: openstack-provision-server
  namespace: openstack
spec:
  baseImageUrl: http://host/images/rhel-guest-image-8.4-992.x86_64.qcow2
  port: 8080
```
* View the specification schema in the "OpenStackProvisionServer" CRD
```
oc describe crd openstackprovisionserver
```
* Create and verify the provisioning server.
```
oc create -f openstack-provision-server.yaml -n openstack
oc get openstackprovisionserver/openstack-provision-server -n openstack
```
16. Create Compute nodes with OpenStackBaremetalSet. The overcloud needs at least one Compute node, and the number of Compute nodes can be scaled after deployment.
NOTE: Use the OpenStackNetConfig resource to create a control plane network and any additional isolated networks.
* Create a file "openstack-compute.yaml" and include the resource specification for compute nodes. For example, the specification for 1 compute node is a below.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute
  namespace: openstack
spec:
  count: 1
  baseImageUrl: http://host/images/rhel-image-8.4.x86_64.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  # If you manually created an OpenStackProvisionServer, you can use it here,
  # otherwise the director Operator will create one for you (with `baseImageUrl` as the image that it serves)
  # to use with this OpenStackBaremetalSet
  # provisionServerName: openstack-provision-server
  ctlplaneInterface: enp2s0
  networks:
  - ctlplane
  - internal_api
  - tenant
  - storage
  roleName: Compute
  passwordSecret: userpassword
```
* Set the following values in the resource specification:
  * ``metadata.name`` --- the name of the Compute node bare metal set, which is "compute" in this example
  * ``metadata.namespace`` --- the director Operator namespace, which is "openstack"
  * ``spec`` --- the configuration for the Compute nodes
* Describe the values for "openstackbaremetalset" CRD
```
oc describe crd openstackbaremetalset
```
* Create and verify the Compute node, and view the bare metal machines that OpenShift manages to verify the creation of the Compute nodes.
```
oc create -f openstack-compute.yaml -n openstack
oc get openstackbaremetalset/compute -n openstack
oc get baremetalhosts -n openshift-machine-api
```
17. Create Ansible playbooks for the overcloud configuration with OpenStackConfigGenerator. We need to create a set of Ansible playbooks to configure the RHOSP software on the overcloud nodes. With these playbooks we can convert the heat configuration to playbooks by using the "config-download" feature in RHOSP.
* Create an "openstack-config-generator.yaml" file that includes the resource specification to generate the Ansible playbooks.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  enableFencing: true
  gitSecret: git-secret
  imageURL: registry.redhat.io/rhosp/openstack-tripleoclient:16.2
  heatEnvConfigMap: heat-env-config-deploy
  # List of heat environment files to include from tripleo-heat-templates/environments
  heatEnvs:
  - ssl/tls-endpoints-public-dns.yaml
  - ssl/enable-tls.yaml
  tarballConfigMap: tripleo-tarball-config-deploy
```
* Create the Ansible config generator and verify it.
```
oc create -f openstack-config-generator.yaml -n openstack
oc get openstackconfiggenerator/default -n openstack
```
18. To create an ephemeral heat service we need four specific container images from registry.redhat.io: ``openstack-heat-api``, ``openstack-heat-engine``, ``openstack-mariadb`` and ``openstack-rabbitmq``. We can change the source location of these images under "ephemeralHeatSettings".
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …
  ephemeralHeatSettings:
    heatAPIImageURL: <heat_api_image_location>
    heatEngineImageURL: <heat_engine_image_location>
    mariadbImageURL: <mariadb_image_location>
    rabbitImageURL: <rabbitmq_image_location>
```
19. To activate the config generation interactive mode, set the "interactive" parameter of the OpenStackConfigGenerator resource to true, as below.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …
  interactive: true
```
20. TripleO is delivered with heat environment files for different deployment scenarios, such as TLS for public endpoints. Heat environment files can be included in the playbook generation using the "heatEnvs" parameter list.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …
  heatEnvs:
  - ssl/tls-endpoints-public-dns.yaml
  - ssl/enable-tls.yaml
```
21. Before the director Operator configures the overcloud software on the nodes, we must register the operating system of all nodes to either the Red Hat Customer Portal or a Red Hat Satellite server, and enable repositories for all nodes. Below are the steps to be followed.
* Access the remote shell
```
oc rsh openstackclient -n openstack
```
* Change to the "cloud-admin" home directory
```
cd /home/cloud-admin
```
* The example playbook below uses the "redhat_subscription" module to register the Controller nodes. Save it as "rhsm.yaml" (used in the command that follows).
```
---
- name: Register Controller nodes
  hosts: Controller
  become: yes
  vars:
    repos:
      - rhel-8-for-x86_64-baseos-eus-rpms
      - rhel-8-for-x86_64-appstream-eus-rpms
      - rhel-8-for-x86_64-highavailability-eus-rpms
      - ansible-2.9-for-rhel-8-x86_64-rpms
      - openstack-16.2-for-rhel-8-x86_64-rpms
      - fast-datapath-for-rhel-8-x86_64-rpms
  tasks:
    - name: Register system
      redhat_subscription:
        username: myusername
        password: p@55w0rd!
        org_id: 1234567
        release: 8.4
        pool_ids: 1a85f9223e3d5e43013e3d6e8ff506fd
    - name: Disable all repos
      command: "subscription-manager repos --disable *"
    - name: Enable Controller node repos
      command: "subscription-manager repos --enable {{ item }}"
      with_items: "{{ repos }}"
```
* Execute the command below to register the overcloud nodes with the required repositories.
```
ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ./rhsm.yaml
```
22. Different versions of the Ansible playbooks are stored in Git, and an OpenStackConfigVersion object exists for each version, referencing the Git "hash/digest". Select the "hash/digest" of the latest version.
```
oc get -n openstack --sort-by {.metadata.creationTimestamp} osconfigversions -o json
```
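A shorter listing can help to pick the hash. This assumes the OpenStackConfigVersion object names are the config-version hashes used as `configVersion` in the next step:
```
# Sorted oldest first, so the last name printed is the latest config version.
oc get osconfigversions -n openstack --sort-by '{.metadata.creationTimestamp}' -o name
```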
23. Configure the overcloud with the director Operator only after the control plane has been created, the bare metal Compute nodes have been provisioned, and the Ansible playbooks to configure the software on each node have been generated. When we create an OpenStackDeploy resource, the director Operator creates a job that runs the Ansible playbooks to configure the overcloud.
* Create a file "openstack-deployment.yaml" that includes the resource specification for the deployment.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: default
  namespace: openstack
spec:
  configVersion: n5fch96h548h75hf4hbdhb8hfdh676h57bh96h5c5h59hf4h88h…
  configGenerator: default
```
NOTE: Values can be changed with the help of the "openstackdeploy" CRD. Describe the CRD to get the required details.
* Apply the above file and check the logs to watch the Ansible playbook run.
```
oc create -f openstack-deployment.yaml
oc logs -f jobs/deploy-openstack-default
```
24. Access an overcloud deployed with the director Operator through the OpenStackClient pod.
* Access the remote shell for "openstackclient" and change to the "cloud-admin" home directory.
```
oc rsh -n openstack openstackclient
cd /home/cloud-admin
```
* Run the "openstack" commands. For example we are creating a default network with the following command
```
openstack network create default
```
25. Access the overcloud dashboard through the overcloud host name or public VIP IP address.
* To log in as the "admin" user, obtain the admin password from the "AdminPassword" parameter in the "tripleo-passwords" secret by running the command below
```
oc get secret tripleo-passwords -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' | base64 -d
```
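To print only the admin password instead of the whole decoded file (a small convenience on top of the command above; assumes the secret lives in the "openstack" namespace):
```
oc get secret tripleo-passwords -n openstack -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' | base64 -d | grep AdminPassword
```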
* Open the web browser
* Enter the host name or public VIP of overcloud dashboard in URL
* Log in to the dashboard with the chosen username and password.
26. Scale the Compute nodes with the director Operator by adding Compute nodes to the overcloud.
* Modify the YAML configuration for the "compute" OpenStackBaremetalSet and increase the "count" parameter for the resource.
```
oc patch osbms compute --type=merge --patch '{"spec":{"count":3}}' -n openstack
```
* The OpenStackBaremetalSet resource automatically provisions the new nodes with the Linux OS. Check the new nodes by running the commands below.
```
oc get baremetalhosts -n openshift-machine-api
oc get openstackbaremetalset
```
27. Remove Compute nodes from the overcloud with the director Operator.
* Access the remote shell "openstackclient"
```
oc rsh -n openstack openstackclient
```
* Identify the Compute nodes that need to be removed and disable the Compute service on those nodes to prevent them from scheduling new instances.
```
openstack compute service list
openstack compute service set <hostname> nova-compute --disable
```
* Exit from "openstackclient" and annotate the BareMetalHost resources that corresponds the nodes that we need to remove "osp-director.openstack.org/delete-host=true" annotation.
```
exit
oc get osbms compute -o json | jq '.status.baremetalHosts | to_entries[] | "\(.key) => \(.value | .hostRef)"'
"compute-0, openshift-worker-3"
"compute-1, openshift-worker-4"
oc annotate -n openshift-machine-api bmh/openshift-worker-3 osp-director.openstack.org/delete-host=true --overwrite
```
* The output below shows that the nodes are annotated and ready to be removed.
```
oc get osbms compute -o json -n openstack | jq .status
{
  "baremetalHosts": {
    "compute-0": {
      "annotatedForDeletion": true,
      "ctlplaneIP": "192.168.25.105/24",
      "hostRef": "openshift-worker-3",
      "hostname": "compute-0",
      "networkDataSecretName": "compute-cloudinit-networkdata-openshift-worker-3",
      "provisioningState": "provisioned",
      "userDataSecretName": "compute-cloudinit-userdata-openshift-worker-3"
    },
    "compute-1": {
      "annotatedForDeletion": false,
      "ctlplaneIP": "192.168.25.106/24",
      "hostRef": "openshift-worker-4",
      "hostname": "compute-1",
      "networkDataSecretName": "compute-cloudinit-networkdata-openshift-worker-4",
      "provisioningState": "provisioned",
      "userDataSecretName": "compute-cloudinit-userdata-openshift-worker-4"
    }
  },
  "provisioningStatus": {
    "readyCount": 2,
    "reason": "All requested BaremetalHosts have been provisioned",
    "state": "provisioned"
  }
}
```
* Modify the YAML configuration for the "compute" OpenStackBaremetalSet resource and decrease the "count" parameter for the resource.
```
oc patch osbms compute --type=merge --patch '{"spec":{"count":1}}' -n openstack
```
* The output below shows that the director Operator deletes the corresponding IP reservation from the OpenStackIPSet and OpenStackNetConfig for the node, and flags the IP reservation entry in the OpenStackNet resource as deleted.
```
oc get osnet ctlplane -o json -n openstack | jq .status.roleReservations.compute
{
  "addToPredictableIPs": true,
  "reservations": [
    {
      "deleted": true,
      "hostname": "compute-0",
      "ip": "192.168.25.105",
      "vip": false
    },
    {
      "deleted": false,
      "hostname": "compute-1",
      "ip": "192.168.25.106",
      "vip": false
    }
  ]
}
```
* Now access the remote shell for "openstackclient" and remove the Compute service entries from the overcloud
```
oc rsh openstackclient -n openstack
openstack compute service list
openstack compute service delete <service-id>
```
* Check the Compute network agent entries in the overcloud, remove them if they exist, and exit from "openstackclient".
```
openstack network agent list
for AGENT in $(openstack network agent list --host <scaled-down-node> -c ID -f value) ; do openstack network agent delete $AGENT ; done
exit
```
## Remarks
TBD