tags: Training

Lab deployment with TripleO-Quickstart


Beginners Guide

If you are a total beginner to this, start here!

  1. Visit https://docs.openstack.org/tripleo-quickstart/latest/getting-started.html
  2. Visit https://docs.openstack.org/tripleo-docs/latest/ci/index.html

Requirements

  1. A testbox with the following minimum requirements

    a. 8-core CPU, 12 GB memory, 60 GB free disk space

    b. A CentOS-7, CentOS-8, or RHEL-8 virthost

Lab setup for TripleO OpenStack Train release


ssh to your testbox or VirtHost

ssh root@virthost

Note: you can use either the root user or a non-root user. A non-root user must have sudo access.

Create a user

useradd oooq
echo "Redhat123" | passwd --stdin oooq
# Needed for quickstart.sh --install-deps
echo "oooq ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/oooq
sudo chmod 0440 /etc/sudoers.d/oooq
su - oooq

As the oooq user, create ssh keys

ssh-keygen -f ~/.ssh/id_rsa -t rsa -N ''
ssh-copy-id root@127.0.0.2

Provision

In this example we'll be using a custom workspace

Note: the default workspace is $HOME/.quickstart

sudo yum -y install git
mkdir ~/git
cd ~/git
git clone https://opendev.org/openstack/tripleo-quickstart.git
cd tripleo-quickstart
# optional, tooling will do this
mkdir /var/tmp/bootcamp

Download quickstart.sh and execute (CentOS-7 host, Train and earlier)

./quickstart.sh -R centosci/train-current-tripleo --teardown all --no-clone --tags all --nodes config/nodes/1ctlr_1comp.yml -p quickstart.yml -w /var/tmp/bootcamp/ 127.0.0.2

Download quickstart.sh and execute (RHEL-8 or CentOS-8 host, Ussuri+)

./quickstart.sh -R tripleo-ci/CentOS-8/master --teardown all --no-clone --tags all --nodes config/nodes/1ctlr_1comp.yml -p quickstart.yml -w /var/tmp/bootcamp/ -e undercloud_local_interface=ens4 127.0.0.2

TripleO Quickstart has created a stack user, and the undercloud and overcloud qcow2 images are located in /home/stack.

undercloud_local_interface=ens4 refers to the interface on the host that all the VMs will be connected to. It should not be the default interface on the host; depending on your host setup, you may need to use a different value.
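
If you are unsure which interface to pass, a quick way to check is with standard iproute2 commands (ens4 above is just an example name; your host will likely differ):

# list the host's interfaces and their state
ip -br link
# the interface carrying the default route is the one you should NOT pass
ip route show default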

Ensure you can now connect to the undercloud

# run a command on the undercloud to confirm the connection works
ssh -F /var/tmp/bootcamp/ssh.config.ansible undercloud ls
# inspect the ssh.config.ansible to see the ssh configuration
cat /var/tmp/bootcamp/ssh.config.ansible

The libvirt VMs run under the stack user. This allows any VMs you already run as root to continue untouched.

Check the VMs (trust but verify)

exit  # to root
su - stack
virsh list --all
exit  # back to oooq
cd /var/tmp/bootcamp

You now have a running VM for the undercloud. The undercloud has most or all of the RPMs required for the undercloud install. You may also have noticed overcloud images in the stack user's home directory on the undercloud node.
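
If you want to confirm that for yourself, here is a quick optional check from the virthost (assuming the workspace path used above and the usual overcloud-full image names):

# list the overcloud images staged in the stack user's home on the undercloud
ssh -F /var/tmp/bootcamp/ssh.config.ansible undercloud "ls -lh overcloud-full*"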

Undercloud Deployment

Execute the Undercloud Deployment (CentOS-7, Train and earlier)

./quickstart.sh -R centosci/train-current-tripleo --no-clone --tags all --nodes config/nodes/1ctlr_1comp.yml -I --teardown none -p quickstart-extras-undercloud.yml -w /var/tmp/bootcamp/ 127.0.0.2

Execute the Undercloud Deployment (CentOS-8 / RHEL-8, Ussuri+)

Known issue: TLS certs are not found and the deployment fails here: https://bugs.launchpad.net/tripleo/+bug/1871703

./quickstart.sh -R tripleo-ci/CentOS-8/master --no-clone --tags all --nodes config/nodes/1ctlr_1comp.yml -I --teardown none -p quickstart-extras-undercloud.yml -w /var/tmp/bootcamp/ -e undercloud_local_interface=ens4 127.0.0.2

The logs for the undercloud install can be found in /home/stack/install-undercloud.log on the undercloud node.

less /home/stack/install-undercloud.log

Note: You can find the configuration for the containers deployment in /home/stack/containers-prepare-parameter.yaml
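
To glance at that file on the undercloud node (path as noted above):

cat /home/stack/containers-prepare-parameter.yaml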

Verify the undercloud deployment

# check if the openstack services are running in containers
sudo podman ps

Container preparation logs are here:

cat /var/log/tripleo-container-image-prepare.log

Inspect the TripleO Undercloud

# check if the openstack client is responding properly
source stackrc
openstack user list
openstack role list
# check baremetal nodes and stacks
openstack server list
openstack baremetal node list
openstack stack list

Note: You should see users and roles, but no servers, nodes or stacks should be populated quite yet :)

Backup your Environment

Backup your undercloud deployment

sudo su -
su - stack
virsh list --all

You should see:

[stack@localhost ~]$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     undercloud                     running
 -     compute_0                      shut off
 -     control_0                      shut off

You'll notice that all the libvirt images, config, etc. are defined under $HOME/.config/libvirt.
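
You can see this for yourself as the stack user on the virthost (a quick look; these are the same paths the backup command below archives):

ls ~/.config/libvirt/
ls ~/pool/ ~/volume_pool.xml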

Begin the backup:

sudo su -
su - stack
virsh autostart --disable undercloud
virsh shutdown undercloud
# wait for the undercloud to stop
virsh list --all
cd /home/stack/
tar -cvf qs_backup.tar .config/libvirt/ pool/ volume_pool.xml id_rsa*

You should have a backup file of roughly 20 GB:

-rw-rw-r--. 1 stack stack 20G Jan 11 15:55 qs_backup.tar
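
If you want to sanity-check the archive before relying on it, you can list its contents without extracting (optional):

tar -tvf qs_backup.tar | head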

We have a bug here:

virsh list --all
error: Cannot create user runtime directory '/run/user/1003/libvirt': Permission denied

Workaround:

sudo chown stack:stack /run/user/
virsh list --all

You should now see all three VMs in the shut off state.
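
For reference, the listing should look roughly like the earlier one (an illustrative example, not captured output):

 Id    Name                           State
----------------------------------------------------
 -     undercloud                     shut off
 -     compute_0                      shut off
 -     control_0                      shut off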

Restart your Undercloud:

virsh start undercloud
virsh list --all
# switch back to the oooq user
sudo su -
su - oooq
cd /var/tmp/bootcamp
ssh -F /var/tmp/bootcamp/ssh.config.ansible undercloud

You may want to rerun some of the openstack commands above to verify that everything is still working.

Prepare the undercloud for the overcloud deployment

These steps will register the baremetal nodes (VMs) for the overcloud, prepare openstack flavors, and set up the deployment configs and commands.

./quickstart.sh -R centosci/train-current-tripleo --no-clone --tags all --nodes config/nodes/1ctlr_1comp.yml -I --teardown none -p quickstart-extras-overcloud-prep.yml -w /var/tmp/bootcamp/ 127.0.0.2

Let's see what this step has done.

Overcloud Nodes

First, openstack flavors were created for the TripleO Overcloud:

(undercloud) [stack@undercloud ~]$ openstack flavor list | grep oooq
| 1ad9280c-244e-4718-9348-bff027ab09b1 | oooq_control       | 8192 | 49 | 0 | 2 | True |
| 5619a7f8-3189-41d3-8b9d-06c0f6e09bb9 | oooq_objectstorage | 8192 | 49 | 0 | 2 | True |
| 86eed00d-c25f-437a-84aa-9703a8805c59 | oooq_compute       | 8192 | 49 | 0 | 2 | True |
| 95453485-25f6-4d42-b7ac-abc1f571693a | oooq_blockstorage  | 8192 | 49 | 0 | 2 | True |
| 962f6ae2-c8c6-4b6c-8cc8-bd865f314c2d | oooq_ceph          | 8192 | 49 | 0 | 2 | True |

Next, we uploaded the overcloud images found in /home/stack:

openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 13be39e4-5fe0-4c12-aa87-30294d19076e | overcloud-full         | active |
| 3839a09b-781f-43e4-9b48-97d8289ef132 | overcloud-full-initrd  | active |
| d134be58-2f34-44d0-844b-e3d0267a0a1f | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+

The images were uploaded with the script in /home/stack/overcloud-prep-images.sh on the undercloud node.

The libvirt overcloud nodes were then introspected and set up by the Ironic service:

openstack baremetal node list
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name      | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| 0ff3a9c6-6bda-41c0-89de-9d030e08eb4b | control-0 | None          | power off   | available          | False       |
| 1153ef81-6f81-4ddc-9f4b-a6ffa099a86c | compute-0 | None          | power off   | available          | False       |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+

The log of the work here can be found in /home/stack/overcloud_prep_images.log on the undercloud node.
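
To walk through exactly what was done, read the script and its log on the undercloud node (paths as given above):

cat /home/stack/overcloud-prep-images.sh
less /home/stack/overcloud_prep_images.log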

Preparing the Undercloud Network

cat /home/stack/overcloud-prep-network.sh

You can see the pattern here: tripleo-quickstart creates the shell scripts required for the particular deployment, executes them, and logs the results. This also lets you view and update the scripts when walking through the process manually.
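
A quick way to browse them is to list the generated scripts and logs in the stack user's home directory on the undercloud (a rough example; exact file names vary by release and playbook):

ls /home/stack/*.sh /home/stack/*.log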

sudo ovs-vsctl show
19265519-b180-434c-9162-8d6d1b979d21
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ctlplane
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
        Port phy-br-ctlplane
            Interface phy-br-ctlplane
                type: patch
                options: {peer=int-br-ctlplane}
        Port br-ctlplane
            Interface br-ctlplane
                type: internal
        Port "eth1"
            Interface "eth1"

ip a show dev br-ctlplane
7: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:9f:b2:f3:f1:fb brd ff:ff:ff:ff:ff:ff
    inet 192.168.24.1/24 brd 192.168.24.255 scope global br-ctlplane
       valid_lft forever preferred_lft forever
    inet 192.168.24.3/32 scope global br-ctlplane
       valid_lft forever preferred_lft forever
    inet 192.168.24.2/32 scope global br-ctlplane
       valid_lft forever preferred_lft forever
    inet6 fe80::29f:b2ff:fef3:f1fb/64 scope link
       valid_lft forever preferred_lft forever

ip a show dev vlan10
9: vlan10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 72:85:19:cd:6f:10 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::7085:19ff:fecd:6f10/64 scope link
       valid_lft forever preferred_lft forever

DNS

cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
nameserver 192.168.23.1

The overcloud deployment heat parameters:

cat network-environment.yaml
{
    "parameter_defaults": {
        "ControlPlaneDefaultRoute": "192.168.24.1",
        "ControlPlaneSubnetCidr": "24",
        "DnsServers": [
            "192.168.23.1"
        ],
        "EC2MetadataIp": "192.168.24.1",
        "ExternalAllocationPools": [
            {
                "end": "10.0.0.250",
                "start": "10.0.0.4"
            }
        ],
        "ExternalInterfaceDefaultRoute": "10.0.0.1",
        "ExternalNetCidr": "10.0.0.1/24",
        "NeutronExternalNetworkBridge": "",
        "PublicVirtualFixedIPs": [
            {
                "ip_address": "10.0.0.5"
            }
        ]
    }
}

SSL Configuration

The TripleO Overcloud will be deployed with SSL (see the documentation).

To read through the details of the configuration, do the following.

cat overcloud_create_ssl_cert.log
ls *ca* *.pem
cat enable-tls.yaml

Validations (pre-deployment)

This is an optional step, but one that might prove handy (see the documentation).

source stackrc
openstack workflow execution create tripleo.validations.v1.run_groups '{"group_names": ["pre-deployment"]}'
openstack task execution list | grep -i SUCCESS | wc -l
99
openstack task execution list | grep -i ERROR
| 1e6b8005-98fc-4015-97f9-c1c0d7015f00 | send_message | tripleo.validations.v1.run_groups |  | 689d40f1-cc13-449a-bbe6-b92ef3b2dc5e | ERROR | Workflow failed due to me... | 2020-01-12 17:08:51 | 2020-01-12 17:08:54 |

To get the details of any failure (note that some failures can be waived, since this is not a production deployment), do the following:

openstack task execution show 1e6b8005-98fc-4015-97f9-c1c0d7015f00 -f shell

Finally deploy the TripleO Overcloud

./quickstart.sh -R centosci/train-current-tripleo --no-clone --tags all --nodes config/nodes/1ctlr_1comp.yml -I --teardown none -p quickstart-extras-overcloud.yml -w /var/tmp/bootcamp/ 127.0.0.2

Now your TripleO Overcloud is deploying
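
While it deploys, you can follow the progress from the undercloud node (assuming the workspace used above and the overcloud_deploy.log location referenced in the Deployment Logs section below):

ssh -F /var/tmp/bootcamp/ssh.config.ansible undercloud
tail -f overcloud_deploy.log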

If you need to change something in the deployment or restart it, remember you can restore from your backup and execute the prep and deployment steps again.

# logged in as the stack user
virsh destroy undercloud
virsh destroy compute_0
virsh destroy control_0
sudo systemctl stop libvirtd
tar -xvf qs_backup.tar
sudo systemctl start libvirtd
sudo chown stack:stack /run/user/  # workaround, < need to fix >
virsh start undercloud

Let's see what the deployment commands and output look like.

Deployment Commands:

# on the undercloud
cat overcloud-deploy.sh
# note the stackrc file.
cat /home/stack/stackrc
source /home/stack/stackrc
# double checks the nodes are available
openstack hypervisor stats show -c count -f value
# deployment command
openstack overcloud deploy \
    --override-ansible-cfg /home/stack/custom_ansible.cfg \
    --templates /usr/share/openstack-tripleo-heat-templates \
    --libvirt-type qemu \
    --timeout 90 \
    -e /home/stack/cloud-names.yaml \
    -e /home/stack/containers-prepare-parameter.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
    -e /home/stack/network-environment.yaml \
    -e /home/stack/overcloud_storage_params.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml \
    -e /home/stack/enable-tls.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml \
    -e /home/stack/inject-trust-anchor.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
    --validation-warnings-fatal \
    -e /home/stack/overcloud-selinux-config.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/ci/environments/ovb-ha.yaml

Deployment Logs:

# most of the steps are logged here:
cat overcloud_deploy.log

# The work done to prepare the containers is done here:
/var/log/tripleo-container-image-prepare.log

# There is now a new openstack config file.
cat overcloudrc
source overcloudrc

TripleO stands for OpenStack On OpenStack. Let's see if that is really the case :)

source stackrc
openstack server list
+--------------------------------------+-------------------------+--------+------------------------+----------------+-----------+
| ID                                   | Name                    | Status | Networks               | Image          | Flavor    |
+--------------------------------------+-------------------------+--------+------------------------+----------------+-----------+
| b6ec8533-2104-4014-93bc-4ca0fa50da68 | overcloud-controller-0  | ACTIVE | ctlplane=192.168.24.28 | overcloud-full | baremetal |
| 78494959-a57f-4768-8915-180ed4840866 | overcloud-novacompute-0 | ACTIVE | ctlplane=192.168.24.30 | overcloud-full | baremetal |
+--------------------------------------+-------------------------+--------+------------------------+----------------+-----------+
# ssh to the compute node
ssh heat-admin@192.168.24.30
sudo podman ps  # looks like nova is running on the compute :)
# are there any instances running in nova?
exit
source overcloudrc
openstack server list
# nope, not yet.

Validating the TripleO Overcloud Services

This will be shown in a later slide deck

CONGRATS, you made it through \o/

FAQ

Find this document incomplete? Leave a comment!
