# Jio Redhat OpenStack on OpenShift

The goal of the project is to enable deployment of Red Hat OpenStack on the Jio Indradhanus Cloud (customized Red Hat OpenShift).

## Goal

We need to deliver RHOSP on Jio-RHOCP by December 10th, referred to as `work`.

__Deadline__: 10th December, to realize deployment in Jio Cloud Foundry.

## Redhat Expectations

Red Hat needs to work independently of Jio and build the prerequisites for the said deployment work.

## @2Ip11oY1QEqczIfe76wb5A please build up the details.

### RHOSP installation prerequisites

1. An OpenShift environment of version 4.6 or later is needed, hosted on bare metal.
2. To install the RHOSP director Operator, the cluster must use the "baremetal" platform type and have the "baremetal" cluster Operator enabled. OCP clusters installed with installer-provisioned infrastructure (IPI) or assisted installation (AI) meet this requirement; clusters installed with user-provisioned infrastructure (UPI) use the "none" platform type and might have the baremetal cluster Operator disabled.
3. To check whether the baremetal cluster Operator is enabled, navigate to Administration > Cluster Settings > ClusterOperators > baremetal, scroll to the "Conditions" section, and view the "Disabled" status.
4. To check the platform type of the OCP cluster, navigate to Administration > Global Configuration > Infrastructure, switch to "YAML" view, and check the "status.platformStatus" value. A CLI check for points 3 and 4 is sketched below.
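The same two checks can be run from the command line. This is a minimal sketch using standard `oc` queries; condition names and output layout may vary slightly between OCP releases.
```
# Check whether the baremetal cluster Operator exists and inspect its conditions (including "Disabled")
oc get clusteroperator baremetal -o yaml

# Show the platform type reported by the cluster-wide Infrastructure resource
oc get infrastructure cluster -o jsonpath='{.status.platformStatus.type}{"\n"}'
```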
5. The setup should have these two Operators installed: OpenShift Virtualization 4.10 and SR-IOV Network Operator 4.10. The generic process to install an Operator from the OpenShift GUI is as follows (a scripted alternative is sketched after this step):
   * In the OpenShift Container Platform web console, click Operators → OperatorHub.
   * Select the SR-IOV Network Operator (or the OpenShift Virtualization Operator) from the list of available Operators, and then click Install.
   * On the Install Operator page, under Installed Namespace, select the Operator recommended Namespace.
   * Click Install.
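The same installation can be expressed as OperatorHub manifests instead of console clicks. The snippet below is a hedged sketch for the SR-IOV Network Operator only; the channel name is an assumption and should be confirmed against the catalog in the cluster (for example with `oc describe packagemanifest sriov-network-operator -n openshift-marketplace`), and the OpenShift Virtualization Operator needs an equivalent set of manifests in its own namespace.
```
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
  - openshift-sriov-network-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: stable                      # assumption: verify the channel for the 4.10 catalog
  name: sriov-network-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```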
6. Install `opm` by downloading the required package ([opm package](https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest-4.10/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC)) and running the commands below.
```
tar xvf <filename>
# NOTE: to copy the file to a debug node, use: oc cp <filename> <debug pod name>:/tmp
echo $PATH
sudo mv ./opm /usr/local/bin
opm version
```
7. Configure a git repository for the director Operator to store the overcloud configuration (a sketch of the matching secret follows this step).
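Later steps reference this repository through a secret named `git-secret` (see the `gitSecret: git-secret` fields in the OpenStackControlPlane and OpenStackConfigGenerator resources). A hedged sketch of creating that secret is below; the key names (`git_url`, `git_ssh_identity`) and URL format are assumptions based on the director Operator documentation pattern and should be verified for the installed Operator version.
```
# Assumption: an SSH-accessible, writable repository dedicated to the overcloud configuration.
oc create secret generic git-secret -n openstack \
  --from-literal=git_url=ssh://git@<git-server>/<project>/overcloud-config.git \
  --from-file=git_ssh_identity=<path to private key>
```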
8. Create persistent volumes to fulfil the persistent volume claims that the director Operator creates: 4 GB for "openstackclient-cloud-admin", 1 GB for "openstackclient-hosts", and 50 GB for the base image that the director Operator clones for each Controller virtual machine (a sample volume is sketched after this step).
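If no dynamic provisioner backs those claims, static PersistentVolumes can be pre-created. The manifest below is a minimal NFS-backed sketch for one of the three volumes; the server, export path, and volume name are placeholders, and the storage class mirrors the `host-nfs-storageclass` used later in the OpenStackControlPlane resource.
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openstackclient-cloud-admin-pv   # hypothetical name
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: host-nfs-storageclass
  nfs:
    server: <nfs-server>                 # placeholder
    path: /exports/openstackclient-cloud-admin
```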
9. Download a Red Hat Enterprise Linux 8 QCOW2 image from the [Red Hat image portal](https://access.redhat.com/downloads).
10. Install the "virtctl" and "virt-customize" client tools on the workstation; on a Red Hat Enterprise Linux workstation this can be done with the commands below.
```
oc get consoleclidownload
oc describe consoleclidownload virtctl-clidownloads-kubevirt-hyperconverged
# choose the required link to download the package (Linux/Mac/Windows)
tar -xvf virtctl.tar.gz
echo $PATH
mv virtctl /usr/local/bin
```

## Design/Architecture

![](https://i.imgur.com/nHD6TlH.png)

## Steps for Deployment

1. Create a namespace "openstack".
```
oc new-project openstack
```
2. Create an index image and push it to the registry.
```
BUNDLE_IMG="registry.redhat.io/rhosp-rhel8/osp-director-operator-bundle:1.3.0-8"
INDEX_IMG="devopsartifact.jio.com/indradhanus__dev__dcr/rhosp-director/osp-director-operator-index:v1.0"
opm index add --bundles ${BUNDLE_IMG} --tag ${INDEX_IMG} -u podman --pull-tool podman
podman push ${INDEX_IMG}
```
3. Create a file osp-director-operator.yaml and include the YAML content below.
```
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: osp-director-operator-index
  namespace: openstack
spec:
  sourceType: grpc
  image: devopsartifact.jio.com/indradhanus__dev__dcr/rhosp-director/osp-director-operator-index:v1.0
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: "osp-director-operator-group"
  namespace: openstack
spec:
  targetNamespaces:
  - openstack
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: osp-director-operator-subscription
  namespace: openstack
spec:
  config:
    env:
    - name: WATCH_NAMESPACE
      value: openstack,openshift-machine-api,openshift-sriov-network-operator
  source: osp-director-operator-index
  sourceNamespace: openstack
  name: osp-director-operator
```
4. Apply the above YAML file in the "openstack" namespace.
```
oc apply -f osp-director-operator.yaml
```
5. Verify that the director Operator is installed successfully (an additional check is sketched below).
```
oc get operators | grep -i openstack
```
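Beyond `oc get operators`, a supplementary check (not part of the original procedure) is to confirm that the Subscription resolved to a ClusterServiceVersion and that the Operator pod is running:
```
# The osp-director-operator ClusterServiceVersion should reach the "Succeeded" phase
oc get csv -n openstack
# The Operator controller pod should be in the Running state
oc get pods -n openstack
```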
6. View and describe the director Operator CRDs.
```
oc get crd | grep -i openstack
oc describe crd "<CRD name>"
```
7. Modify the default QCOW2 image with the "virt-customize" commands below, because the image does not use biosdev predictable network interface names (the commands strip `net.ifnames=0` from the kernel arguments). This image acts as the base operating system for the Controller virtual machines.
```
sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
sudo virt-customize -a <local path to image> --run-command 'sed -i -e "s/^\(GRUB_CMDLINE_LINUX=.*\)net.ifnames=0 \(.*\)/\1\2/" /etc/default/grub'
```
8. Upload the image to OpenShift Virtualization with virtctl.
```
virtctl image-upload dv openstack-base-img -n openstack --size=50Gi --image-path=/home/sarathreddy/rhel-isos/rhel-8.2-update-2-x86_64-kvm.qcow2 --storage-class ocs-storagecluster-cephfs --insecure
```
9. Setting the root password for nodes is optional, as we can log in to the nodes with the SSH keys defined in the "osp-controlplane-ssh-keys" secret.
* Convert the chosen password to a base64 value.
```
echo -n "p@ssw0rd!" | base64
cEBzc3cwcmQh
```
* Create a file named "openstack-userpassword.yaml".
```
apiVersion: v1
kind: Secret
metadata:
  name: userpassword
  namespace: openstack
data:
  NodeRootPassword: "cEBzc3cwcmQh"
```
* Create the "userpassword" secret.
```
oc create -f openstack-userpassword.yaml -n openstack
```
10. Create an overcloud control plane network with OpenStackNetConfig. Create a file (for example "osnetconfig.yaml") with the resource specification below.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: ens1f1
            description: Linux bridge with enp6s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 192.168.25.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv6:
        allocationEnd: 2405:200:5f02:a392:100::112
        allocationStart: 2405:200:5f02:a392:100::110
        cidr: 2405:200:5f02:a392::0/64
        gateway: 2405:200:5f02:a392::1
      attachConfiguration: br-osp
  # optional: configure static mapping for the networks per node. If there is none, a random one gets created
  reservations:
    controller-0:
      ipReservations:
        ctlplane: 2405:200:5f02:a392:100::110
    compute-0:
      ipReservations:
        ctlplane: 2405:200:5f02:a392:100::111
    compute-1:
      ipReservations:
        ctlplane: 2405:200:5f02:a392:100::112
```
* Create and verify the control plane network with the commands below.
```
oc create -f osnetconfig.yaml -n openstack
oc get openstacknetconfig openstacknetconfig
```
11. Create VLAN networks for network isolation with OpenStackNetConfig. Create a file for the network configuration that includes the resource specification for the VLAN networks. For example, the specification below covers the internal API, storage, storage mgmt, tenant, and external networks, which manage VLAN-tagged traffic over the Linux bridges "br-ex" and "br-osp" connected to the "enp6s0" and "enp7s0" Ethernet devices on each worker node.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackNetConfig
metadata:
  name: openstacknetconfig
spec:
  attachConfigurations:
    br-osp:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: ens4f0
            description: Linux bridge with enp7s0 as a port
            name: br-osp
            state: up
            type: linux-bridge
            mtu: 1500
    br-ex:
      nodeNetworkConfigurationPolicy:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        desiredState:
          interfaces:
          - bridge:
              options:
                stp:
                  enabled: false
              port:
              - name: ens1f1
            description: Linux bridge with enp6s0 as a port
            name: br-ex
            state: up
            type: linux-bridge
            mtu: 1500
  # optional DnsServers list
  dnsServers:
  - 172.22.0.1
  # optional DnsSearchDomains list
  dnsSearchDomains:
  - osptest.test.metalkube.org
  - some.other.domain
  # DomainName of the OSP environment
  domainName: osptest.test.metalkube.org
  networks:
  - name: Control
    nameLower: ctlplane
    subnets:
    - name: ctlplane
      ipv6:
        allocationEnd: 2405:200:5f02:a392:100::112
        allocationStart: 2405:200:5f02:a392:100::110
        cidr: 2405:200:5f02:a392::0/64
        gateway: 2405:200:5f02:a392::1
      attachConfiguration: br-osp
  - name: InternalApi
    nameLower: internal_api
    mtu: 1350
    subnets:
    - name: internal_api
      attachConfiguration: br-osp
      vlan: 20
      ipv4:
        allocationEnd: 172.17.0.250
        allocationStart: 172.17.0.10
        cidr: 172.17.0.0/24
  - name: External
    nameLower: external
    subnets:
    - name: external
      ipv4:
        allocationEnd: 10.0.0.250
        allocationStart: 10.0.0.10
        cidr: 10.0.0.0/24
        gateway: 10.0.0.1
      attachConfiguration: br-ex
  - name: Storage
    nameLower: storage
    mtu: 1500
    subnets:
    - name: storage
      ipv4:
        allocationEnd: 172.18.0.250
        allocationStart: 172.18.0.10
        cidr: 172.18.0.0/24
      vlan: 30
      attachConfiguration: br-osp
  - name: StorageMgmt
    nameLower: storage_mgmt
    mtu: 1500
    subnets:
    - name: storage_mgmt
      ipv4:
        allocationEnd: 172.19.0.250
        allocationStart: 172.19.0.10
        cidr: 172.19.0.0/24
      vlan: 40
      attachConfiguration: br-osp
  - name: Tenant
    nameLower: tenant
    vip: False
    mtu: 1500
    subnets:
    - name: tenant
      ipv4:
        allocationEnd: 172.20.0.250
        allocationStart: 172.20.0.10
        cidr: 172.20.0.0/24
      vlan: 50
      attachConfiguration: br-osp
```
* Save the file once the network specification is finished. Create the network configuration and verify it with the commands below.
```
oc create -f openstacknetconfig.yaml -n openstack
oc get openstacknetconfig/openstacknetconfig -n openstack
oc get openstacknetattach -n openstack
oc get openstacknet -n openstack
oc get nncp
```
12. Add custom templates to the overcloud configuration. Archive the custom templates into a tarball file so that these templates become part of the overcloud deployment.
* Navigate to the location of the custom templates (example below).
```
cd /home/sarathreddy/
vim environmentfile.yaml
```
* Update the sample environment values; these need to be changed accordingly later.
```
...
data:
  network_environment.yaml: |+
    resource_registry:
      OS::TripleO::Compute::Net::SoftwareConfig: net-config-static-bridge-compute.yaml
  cloud_name.yaml: |+
    parameter_defaults:
      CloudDomain: ocp4.example.com
      CloudName: overcloud.ocp4.example.com
      CloudNameInternal: overcloud.internalapi.ocp4.example.com
      CloudNameStorage: overcloud.storage.ocp4.example.com
      CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com
      CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com
```
* Archive the template into a tarball.
```
tar -cvzf custom-config.tar.gz environmentfile.yaml
```
* Create the "tripleo-tarball-config" ConfigMap and use the tarball as data.
```
oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
```
* Verify the ConfigMap of the custom configuration.
```
oc get configmap/tripleo-tarball-config -n openstack
```
13. Add the custom environment files to the overcloud configuration. NOTE: create custom environment files for the overcloud deployment first (an illustrative file is sketched after this step).
* Create the "heat-env-config" ConfigMap and use the directory that contains the environment files as data.
```
oc create configmap -n openstack heat-env-config --from-file=~/custom_environment_files/ --dry-run=client -o yaml | oc apply -f -
```
* View the ConfigMap with the command below.
```
oc get configmap/heat-env-config -n openstack
```
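Step 13 assumes a `~/custom_environment_files/` directory already populated with environment files. The file below is purely illustrative of what such a file might contain; the file name and parameter values are assumptions and must be replaced with values for the target environment.
```
# ~/custom_environment_files/site-overrides.yaml   (hypothetical file name)
parameter_defaults:
  TimeZone: 'Asia/Kolkata'            # assumption: site time zone
  NtpServer: ['pool.ntp.org']         # assumption: reachable NTP server(s)
  NeutronBridgeMappings: datacentre:br-ex
```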
14. Create a control plane with OpenStackControlPlane. Use the OpenStackNetConfig resource to create the control plane network and any additional isolated networks, then apply the specification below.
```
apiVersion: osp-director.openstack.org/v1beta2
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  gitSecret: git-secret
  openStackClientNetworks:
    - ctlplane
    - internal_api
    - external
  openStackClientStorageClass: host-nfs-storageclass
  passwordSecret: userpassword
  caConfigMap: tripleo-tarball-config
  virtualMachineRoles:
    Controller:
      roleName: Controller
      roleCount: 1
      networks:
        - ctlplane
        - internal_api
        - external
        - tenant
        - storage
        - storage_mgmt
      cores: 12
      memory: 64
      rootDisk:
        diskSize: 100
        baseImageVolumeName: controller-base-img
        # storageClass must support RWX to be able to live migrate VMs
        storageClass: host-nfs-storageclass
        storageAccessMode: ReadWriteMany
        # When using OpenShift Virtualization with OpenShift Container Platform Container Storage,
        # specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks.
        # With virtual machine disks, RBD block mode volumes are more efficient and provide better
        # performance than Ceph FS or RBD filesystem-mode PVCs.
        # To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and
        # VolumeMode: Block.
        storageVolumeMode: Block
      # optional: configure additional disks to be attached to the VMs;
      # they need to be configured manually inside the VMs where they are to be used.
      additionalDisks:
        - name: datadisk
          diskSize: 500
          storageClass: host-nfs-storageclass
          storageAccessMode: ReadWriteMany
          storageVolumeMode: Block
  openStackRelease: "16.2"
```
* Create and verify the control plane.
```
oc create -f openstack-controller.yaml -n openstack
oc get openstackcontrolplane/overcloud -n openstack
```
* View the openstackvmsets and virtualmachines with the commands below.
```
oc get openstackvmsets -n openstack
oc get virtualmachines
```
* Test access to the "openstackclient" remote shell with the command below.
```
oc rsh -n openstack openstackclient
```
15. Create a provisioning server with OpenStackProvisionServer. The provisioning server serves a specific RHEL QCOW2 image for provisioning the Compute nodes for RHOSP.
* Create a file named "openstack-provision.yaml" and include the resource specification for the provisioning server. For example, the specification below is for a provisioning server using a specific RHEL 8.4 QCOW2 image.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackProvisionServer
metadata:
  name: openstack-provision-server
  namespace: openstack
spec:
  baseImageUrl: http://host/images/rhel-guest-image-8.4-992.x86_64.qcow2
  port: 8080
```
* View the specification schema in the "OpenStackProvisionServer" CRD.
```
oc describe crd openstackprovisionserver
```
* Create and verify the provisioning server.
```
oc create -f openstack-provision-server.yaml -n openstack
oc get openstackprovisionserver/openstack-provision-server -n openstack
```
16. Create Compute nodes with OpenStackBaremetalSet. The overcloud needs at least one Compute node, and the number of Compute nodes can be scaled after deployment. NOTE: use the OpenStackNetConfig resource to create the control plane network and any additional isolated networks.
* Create a file "openstack-compute.yaml" and include the resource specification for the Compute nodes. For example, the specification for one Compute node is below.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackBaremetalSet
metadata:
  name: compute
  namespace: openstack
spec:
  count: 1
  baseImageUrl: http://host/images/rhel-image-8.4.x86_64.qcow2
  deploymentSSHSecret: osp-controlplane-ssh-keys
  # If you manually created an OpenStackProvisionServer, you can use it here,
  # otherwise the director Operator will create one for you (with `baseImageUrl` as the image that it serves)
  # to use with this OpenStackBaremetalSet
  # provisionServerName: openstack-provision-server
  ctlplaneInterface: enp2s0
  networks:
    - ctlplane
    - internal_api
    - tenant
    - storage
  roleName: Compute
  passwordSecret: userpassword
```
* Set the following values in the resource specification:
``metadata.name`` --- the name of the Compute node bare metal set (here, "compute");
``metadata.namespace`` --- the director Operator namespace (here, "openstack");
``spec`` --- the configuration for the Compute nodes.
* Describe the values of the "openstackbaremetalset" CRD.
```
oc describe crd openstackbaremetalset
```
* Create and verify the Compute node, and also view the bare metal machines that OpenShift manages to confirm the creation of the Compute nodes.
```
oc create -f openstack-compute.yaml -n openstack
oc get openstackbaremetalset/compute -n openstack
oc get baremetalhosts -n openshift-machine-api
```
17. Create Ansible playbooks for overcloud configuration with OpenStackConfigGenerator. We need a set of Ansible playbooks to configure the RHOSP software on the overcloud nodes; these playbooks convert the heat configuration by using the "config-download" feature in RHOSP.
* Create an "openstack-config-generator.yaml" that includes the resource specification to generate the Ansible playbooks.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  enableFencing: true
  gitSecret: git-secret
  imageURL: registry.redhat.io/rhosp/openstack-tripleoclient:16.2
  heatEnvConfigMap: heat-env-config-deploy
  # List of heat environment files to include from tripleo-heat-templates/environments
  heatEnvs:
    - ssl/tls-endpoints-public-dns.yaml
    - ssl/enable-tls.yaml
  tarballConfigMap: tripleo-tarball-config-deploy
```
* Create the Ansible config generator and verify it.
```
oc create -f openstack-config-generator.yaml -n openstack
oc get openstackconfiggenerator/default -n openstack
```
18. To create an ephemeral heat service we need four specific container images from registry.redhat.io: ``openstack-heat-api``, ``openstack-heat-engine``, ``openstack-mariadb``, and ``openstack-rabbitmq``. We can change the source location of these images under "ephemeralHeatSettings".
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …
  ephemeralHeatSettings:
    heatAPIImageURL: <heat_api_image_location>
    heatEngineImageURL: <heat_engine_image_location>
    mariadbImageURL: <mariadb_image_location>
    rabbitImageURL: <rabbitmq_image_location>
```
19. To activate interactive mode for config generation, set `interactive: true` in the OpenStackConfigGenerator resource, as below.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …
  interactive: true
```
20. TripleO ships heat environment files for different deployment scenarios, such as enabling TLS for public endpoints. Heat environment files can be included in the playbook generation using the "heatEnvs" parameter list.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: default
  namespace: openstack
spec:
  …
  heatEnvs:
    - ssl/tls-endpoints-public-dns.yaml
    - ssl/enable-tls.yaml
```
21. Before the director Operator configures the overcloud software on the nodes, we must register the OS of all nodes to either the Red Hat Customer Portal or a Red Hat Satellite server, and enable the repositories for all nodes. The steps to follow are below (an optional verification sketch follows this step).
* Access the remote shell.
```
oc rsh openstackclient -n openstack
```
* Change to the "cloud-admin" home directory.
```
cd /home/cloud-admin
```
* The example playbook below uses the "redhat_subscription" module to register the Controller nodes.
```
---
- name: Register Controller nodes
  hosts: Controller
  become: yes
  vars:
    repos:
      - rhel-8-for-x86_64-baseos-eus-rpms
      - rhel-8-for-x86_64-appstream-eus-rpms
      - rhel-8-for-x86_64-highavailability-eus-rpms
      - ansible-2.9-for-rhel-8-x86_64-rpms
      - openstack-16.2-for-rhel-8-x86_64-rpms
      - fast-datapath-for-rhel-8-x86_64-rpms
  tasks:
    - name: Register system
      redhat_subscription:
        username: myusername
        password: p@55w0rd!
        org_id: 1234567
        release: 8.4
        pool_ids: 1a85f9223e3d5e43013e3d6e8ff506fd
    - name: Disable all repos
      command: "subscription-manager repos --disable *"
    - name: Enable Controller node repos
      command: "subscription-manager repos --enable {{ item }}"
      with_items: "{{ repos }}"
```
* Executing the command below registers the overcloud nodes to the required repositories.
```
ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ./rhsm.yaml
```
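To confirm that registration succeeded on all overcloud nodes before continuing, an ad-hoc check such as the one below can be run against the same inventory from the openstackclient pod. This is a convenience check, not part of the original procedure.
```
# Run "subscription-manager status" on every host in the ctlplane inventory
ansible -i /home/cloud-admin/ctlplane-ansible-inventory all -b -m command -a "subscription-manager status"
```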
22. Different versions of the Ansible playbooks are stored in git, and each version has an OpenStackConfigVersion object that references a git hash/digest. Select the hash/digest of the latest version.
```
oc get -n openstack --sort-by {.metadata.creationTimestamp} osconfigversions -o json
```
23. Configure the overcloud with the director Operator only after you have created the control plane, provisioned your bare metal Compute nodes, and generated the Ansible playbooks to configure software on each node. When you create an OpenStackDeploy resource, the director Operator creates a job that runs the Ansible playbooks to configure the overcloud.
* Create a file "openstack-deployment.yaml" that includes the resource specification referencing the Ansible playbooks.
```
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: default
  namespace: openstack
spec:
  configVersion: n5fch96h548h75hf4hbdhb8hfdh676h57bh96h5c5h59hf4h88h…
  configGenerator: default
```
NOTE: values can be adjusted with the help of the "openstackdeploy" CRD; describe the CRD to get the required details.
* Apply the above file and check the logs to watch the Ansible playbook run.
```
oc create -f openstack-deployment.yaml
oc logs -f jobs/deploy-openstack-default
```
24. Access an overcloud deployed with the director Operator through the OpenStackClient pod.
* Access the remote shell for "openstackclient" and change to the "cloud-admin" home directory.
```
oc rsh -n openstack openstackclient
cd /home/cloud-admin
```
* Run the "openstack" commands. For example, create a default network with the following command.
```
openstack network create default
```
25. Access the overcloud dashboard through the overcloud host name or public VIP.
* To log in as the "admin" user, obtain the admin password from the "AdminPassword" parameter in the "tripleo-passwords" secret by running the command below.
```
oc get secret tripleo-passwords -o jsonpath='{.data.tripleo-overcloud-passwords\.yaml}' | base64 -d
```
* Open a web browser.
* Enter the host name or public VIP of the overcloud dashboard in the URL bar.
* Log in to the dashboard with the chosen username and password.
26. Scale the Compute nodes with the director Operator by adding Compute nodes to the overcloud.
* Modify the YAML configuration for the "compute" OpenStackBaremetalSet and increase the "count" parameter of the resource.
```
oc patch osbms compute --type=merge --patch '{"spec":{"count":3}}' -n openstack
```
* The OpenStackBaremetalSet resource automatically provisions the new nodes with a Linux OS. Check the new nodes by running the commands below.
```
oc get baremetalhosts -n openshift-machine-api
oc get openstackbaremetalset
```
27. Remove Compute nodes from the overcloud with the director Operator.
* Access the remote shell "openstackclient".
```
oc rsh -n openstack openstackclient
```
* Identify the Compute node that needs to be removed and disable the Compute service on that node to prevent it from scheduling new instances.
```
openstack compute service list
openstack compute service set <hostname> nova-compute --disable
```
* Exit from "openstackclient" and annotate the BareMetalHost resource that corresponds to the node to remove with the "osp-director.openstack.org/delete-host=true" annotation.
```
exit
oc get osbms compute -o json | jq '.status.baremetalHosts | to_entries[] | "\(.key) => \(.value | .hostRef)"'
"compute-0 => openshift-worker-3"
"compute-1 => openshift-worker-4"
oc annotate -n openshift-machine-api bmh/openshift-worker-3 osp-director.openstack.org/delete-host=true --overwrite
```
* The output below shows that the node is annotated and ready to be removed.
```
oc get osbms compute -o json -n openstack | jq .status
{
  "baremetalHosts": {
    "compute-0": {
      "annotatedForDeletion": true,
      "ctlplaneIP": "192.168.25.105/24",
      "hostRef": "openshift-worker-3",
      "hostname": "compute-0",
      "networkDataSecretName": "compute-cloudinit-networkdata-openshift-worker-3",
      "provisioningState": "provisioned",
      "userDataSecretName": "compute-cloudinit-userdata-openshift-worker-3"
    },
    "compute-1": {
      "annotatedForDeletion": false,
      "ctlplaneIP": "192.168.25.106/24",
      "hostRef": "openshift-worker-4",
      "hostname": "compute-1",
      "networkDataSecretName": "compute-cloudinit-networkdata-openshift-worker-4",
      "provisioningState": "provisioned",
      "userDataSecretName": "compute-cloudinit-userdata-openshift-worker-4"
    }
  },
  "provisioningStatus": {
    "readyCount": 2,
    "reason": "All requested BaremetalHosts have been provisioned",
    "state": "provisioned"
  }
}
```
* Modify the YAML configuration for the "compute" OpenStackBaremetalSet resource and decrease the "count" parameter of the resource.
```
oc patch osbms compute --type=merge --patch '{"spec":{"count":1}}' -n openstack
```
* The output below shows that the director Operator deletes the corresponding IP reservation from OpenStackIPSet and OpenStackNetConfig for the node, and flags the IP reservation entry in the OpenStackNet resource as deleted.
```
oc get osnet ctlplane -o json -n openstack | jq .status.roleReservations.compute
{
  "addToPredictableIPs": true,
  "reservations": [
    {
      "deleted": true,
      "hostname": "compute-0",
      "ip": "192.168.25.105",
      "vip": false
    },
    {
      "deleted": false,
      "hostname": "compute-1",
      "ip": "192.168.25.106",
      "vip": false
    }
  ]
}
```
* Now access the remote shell "openstackclient" and remove the Compute service entries from the overcloud.
```
oc rsh openstackclient -n openstack
openstack compute service list
openstack compute service delete <service-id>
```
* Check the Compute network agent entries in the overcloud, remove them if they exist, and exit from "openstackclient".
```
openstack network agent list
for AGENT in $(openstack network agent list --host <scaled-down-node> -c ID -f value) ; do openstack network agent delete $AGENT ; done
exit
```

## Remarks

TBD
