# Ruck and Rover notes #34
###### tags: `ruck_rover`
:::info
Important links for ruck rover's [ruck/rover links to help](https://hackmd.io/07z0xroHTFi2IbX93P5ZfQ)
**Ruck Rover - Unified Sprint #34**
Dates: Sep 30 - Oct 20
Tripleo CI team ruck|rover: soniya/rlandy/wes
OSP CI team ruck|rover: afazekas(Attila), vgriner (Vadim)
:::
[TOC]
---
## on-going issues
:::danger
## TripleO
Unisprint 33 RR notes: https://hackmd.io/7Q0YO5JKS0agcf9qwoD4IQ?both
### gate
* Bug: standalone upgrade train failing: https://bugs.launchpad.net/tripleo/+bug/1896595
* Note that job is non-voting so not very high priority.
* ipv6 issue - http://paste.openstack.org/show/799170/
* timestamp issue - http://paste.openstack.org/show/799171/
### periodic / 3rd party
- master
- Bug: https://bugs.launchpad.net/tripleo/+bug/1897863 (not a promotion blocker)
- https://review.opendev.org/#/c/755252/ (Add support for rdo openvswitch layered upgrade special treatment.)
- All jobs are **GREEN** except the standalone upgrade master job
- ci.centos introspection failure (master and ussuri)
- https://bugs.launchpad.net/tripleo/+bug/1900158
- https://github.com/redhat-cip/hardware/pull/150
- promoted 10/20
- ussuri
- All jobs are **GREEN** except the standalone upgrade ussuri job
- promoted 10/19
- train
- https://bugs.launchpad.net/tripleo/+bug/1900443 **centos-7** train jobs are failing to find variables for OS - os-tempest
- https://review.opendev.org/#/c/758823/
- Make sure train **centos 8** kicks off
- promoted 10/19
- stein
- promoted 10/20
- centos-7-ovb-3ctlr_1comp_1supp-featureset039-stein job failing
- https://bugs.launchpad.net/tripleo/+bug/1898536 (removed from criteria)
- rocky
- promoted 10/20
- queens
- https://bugs.launchpad.net/tripleo/+bug/1900357
- fs020 queens is failing tempest tests - test_floatingip.FloatingIPQosTest and test_server_multi_create_auto_allocate
### General Reviews/Fixes
openstack-tox-molecule jobs:
https://bugs.launchpad.net/tripleo/+bug/1900033
## OSP
:::
## Oct 20th
### TripleO
opendev DOWN
status updated above
---
## Oct 19th
### TripleO
* check
tripleo-ci-centos-8-content-provider job fails in train and ussuri - https://bugs.launchpad.net/tripleo/+bug/1899904
ipv6 issue - http://paste.openstack.org/show/799170/ and timestamp issue - http://paste.openstack.org/show/799171/ (please copy the relevant paste content into hackmd)
periodic-tripleo-ci-centos-7-standalone-stein job fails because tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_port_security_macspoofing_port fails in stein even though the test was moved to the skiplist.
https://bugs.launchpad.net/tripleo/+bug/1899980
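A quick way to sanity-check whether the failing test id actually matches the skiplist entries is to run the patterns against it directly. A minimal sketch - the pattern list and helper below are illustrative only, not the real tempest-skiplist tooling or format:

```python
import re

# Illustrative skiplist entries (regex patterns); the real skiplist lives in
# the openstack-tempest-skiplist repo and uses a richer YAML format.
SKIPLIST = [
    r"tempest\.scenario\.test_network_basic_ops\.TestNetworkBasicOps"
    r"\.test_port_security_macspoofing_port",
]

def is_skipped(test_id, skiplist=SKIPLIST):
    """Return True when test_id matches any skiplist pattern."""
    return any(re.search(pattern, test_id) for pattern in skiplist)
```

If the failing test id does not match, the job may simply be consuming a stale or wrong skiplist rather than the test regressing.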
* gate
* periodic / 3rd party
**master**
fs020 failed
ci.centos introspection failure
https://bugs.launchpad.net/tripleo/+bug/1900158
**ussuri**
promoted yesterday
ci.centos introspection error
**train**
https://bugs.launchpad.net/tripleo/+bug/1900443 "centos-7 train jobs are failing to find variables for OS - os-tempest"
https://review.opendev.org/#/c/758823/
**stein**
tempest issue - https://bugs.launchpad.net/tripleo/+bug/1899980
**rocky**
did not run container builds
**queens**
fs020 failed
---
## Oct 15th
### TripleO
* check
tripleo-ci-centos-8-content-provider job fails in train and ussuri - https://bugs.launchpad.net/tripleo/+bug/1899904
periodic-tripleo-ci-centos-7-standalone-stein job fails because tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_port_security_macspoofing_port fails in stein even though the test was moved to the skiplist.
https://bugs.launchpad.net/tripleo/+bug/1899980
* gate
openstack-tox-molecule jobs:
https://bugs.launchpad.net/tripleo/+bug/1900033
vexxhost fs039:
TASK [ovb-manage : Create a key in case of extranode]
Quota exceeded, too many key pairs. (HTTP 403) (Request-ID: req-2cb3932c-8791-4d8a-ad86-162a41ce2b45)
I checked the tenant - it looks like we only have three key pairs.
* periodic / 3rd party
**master**
fs020 failed
ci.centos introspection failure
https://bugs.launchpad.net/tripleo/+bug/1900158
**ussuri**
promoted yesterday
ci.centos introspection error
**train**
promoted 10/13
**stein**
tempest issue - https://bugs.launchpad.net/tripleo/+bug/1899980
**rocky**
did not run container builds
**queens**
fs020 failed
---
## Oct 13th
### TripleO
* check
* gate
* periodic / 3rd party
**master**
https://bugs.launchpad.net/tripleo/+bug/1899627
fs035 jobs are failing in master to deploy the overcloud - "AnsibleUndefinedVariable: 'external_gateway_ip'"
**ussuri**
seeing a lot of Write Failure: HTTPSConnectionPool(host='trunk.registry.rdoproject.org', port=443): Read timed out. errors
**train**
promoted 10/13
**stein**
promoted yesterday
**rocky**
runs on weekend
**queens**
runs on weekend
---
## Oct 9th
### TripleO
* check
* gate
~~https://bugs.launchpad.net/tripleo/+bug/1899054 openstack-tox-molecule is failing the gate - 'module' object has no attribute 'which'~~
validations failures
<rlandy|rover> ubuntu-bionic | "msg": "No package matching 'gettext' is available"
<rlandy|rover> gchamoul: https://zuul.openstack.org/status openstack/tripleo-validations
<fungi> rlandy|rover: yes, one of our larger regions had a problem with its mirror server, we've taken it out of rotation for the time being
<fungi> rechecking should be safe at this point, we're not booting new nodes there for now
* periodic / 3rd party
**master**
https://bugs.launchpad.net/tripleo/+bug/1898931
periodic-tripleo-ci-centos-8-scenario003-standalone-master is failing standalone deploy "Service mistral_executor has not started yet"
https://review.opendev.org/#/c/756590/
waiting on merge here
two tempest failures (rerunning - one in a standalone scenario and one in fs020)
neutron_tempest_plugin.scenario.test_floatingip.FloatingIpMultipleRoutersTest
neutron_tempest_plugin.scenario.test_floatingip.FloatingIPQosTest
**ussuri**
**train**
**stein**
promoted yesterday
**rocky**
runs on weekend
**queens**
runs on weekend
---
## Oct 8th
### TripleO
* check
* gate
https://bugs.launchpad.net/tripleo/+bug/1899054
openstack-tox-molecule is failing the gate - 'module' object has no attribute 'which'
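The `'module' object has no attribute 'which'` error usually means `shutil.which` was called under Python 2, where it does not exist. A hedged compatibility sketch - this is an assumption about the root cause, not the fix that actually merged:

```python
try:
    # Python 3.3+ provides shutil.which
    from shutil import which
except ImportError:
    # Python 2 fallback: distutils offers an equivalent executable lookup
    from distutils.spawn import find_executable as which

# Returns the full path of the executable, or None if it is not on PATH
print(which("sh"))
```

If the tox env resolved to Python 2 (e.g. a basepython mismatch), any direct `shutil.which` call would raise exactly this AttributeError.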
* periodic / 3rd party
**master**
https://bugs.launchpad.net/tripleo/+bug/1898931
periodic-tripleo-ci-centos-8-scenario003-standalone-master is failing standalone deploy "Service mistral_executor has not started yet"
https://review.opendev.org/#/c/756590/
**ussuri**
**train**
**stein**
looking for rerun today
fix for stein tempest failures: https://review.opendev.org/#/c/755555/
removed periodic-tripleo-ci-centos-7-standalone-upgrade-stein from criteria due to: https://review.opendev.org/#/c/756152/
**rocky**
runs on weekend
**queens**
ran and passed yesterday
---
## Oct 7th
### TripleO
* check
* gate
* periodic / 3rd party
**master**
https://bugs.launchpad.net/tripleo/+bug/1898931
promoted
**ussuri**
promoted
**train**
promoted
**stein**
centos-7 featureset039-stein job is failing consistently due to unable to establish ssh connection - https://bugs.launchpad.net/tripleo/+bug/1898536
centos-7 multinode featureset-39 job is failing consistently due to failure of octavia_tempest_plugin.tests.scenario.v2.test_load_balancer.LoadBalancerScenarioTest.test_load_balancer_ipv4_CRUD - https://bugs.launchpad.net/tripleo/+bug/1898539
fix for stein tempest failures: https://review.opendev.org/#/c/755555/
removed periodic-tripleo-ci-centos-7-standalone-upgrade-stein from criteria due to: https://review.opendev.org/#/c/756152/
**rocky**
runs tomorrow
**queens**
runs tomorrow
---
## Oct 5th
### TripleO
* check
* gate
include gcc in bindep, for tests https://review.opendev.org/755885
openstack-tox-py36 failures
* periodic / 3rd party
**master**
**ussuri**
**train**
third party centos8-train is stable across third-party openstack-periodic-integration-stable2
OVB jobs are failing inconsistently; waiting for the next re-run
https://bugs.launchpad.net/tripleo/+bug/1898046
Centos-8 Train OVB jobs are failing tempest - Failed to establish authenticated ssh connection to cirros
**stein**
centos-7 featureset039-stein job is failing consistently due to unable to establish ssh connection - https://bugs.launchpad.net/tripleo/+bug/1898536
centos-7 multinode featureset-38 job is failing consistently due to failure of octavia_tempest_plugin.tests.scenario.v2.test_load_balancer.LoadBalancerScenarioTest.test_load_balancer_ipv4_CRUD - https://bugs.launchpad.net/tripleo/+bug/1898539
**rocky**
**queens**
## Oct 2nd
### TripleO
* check
* gate
include gcc in bindep, for tests https://review.opendev.org/755885
openstack-tox-py36 failures
* periodic / 3rd party
**master**
all jobs are good except the standalone upgrade master job
**ussuri**
All jobs are **GREEN** except the standalone upgrade ussuri job
rerun on periodic-tripleo-ci-centos-8-scenario000-multinode-oooq-container-updates-ussuri
**train**
OVB jobs are failing inconsistently; waiting for the next re-run
https://bugs.launchpad.net/tripleo/+bug/1898046
Centos-8 Train OVB jobs are failing tempest - Failed to establish authenticated ssh connection to cirros
**stein**
Checking upgrades status
**rocky**
Needs rerun
**queens**
Promoted
## Oct 1st
### TripleO
* check
    * failing molecule in stable/train:
https://opendev.org/openstack/tripleo-ansible/src/branch/stable/train/zuul.d/playbooks/pre.yml#L54-L60
Emilien creating LP for it - need to disable GPG check there
related review: https://review.opendev.org/#/c/752630/
[train-only] install docker.com gpg key https://review.opendev.org/755506
See error in: https://review.opendev.org/#/c/754460/
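The train-only workaround above is to install the docker.com GPG key before the package step (or to disable the GPG check for that install). A hedged sketch of what such an Ansible task might look like - the play structure is a placeholder; only the key URL mirrors review 755506:

```yaml
# Sketch only: import the docker.com GPG key ahead of package installs so
# yum/dnf can verify the docker packages instead of failing the GPG check.
- hosts: all
  tasks:
    - name: Install docker.com GPG key (approach from review 755506)
      rpm_key:
        key: https://download.docker.com/linux/centos/gpg
        state: present
```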
* gate
* periodic / 3rd party
**master**
third party centos8-master is all good
**ussuri**
third party centos8-ussuri is all healthy including ovb jobs
**train**
third party centos8-train is stable across third-party openstack-periodic-integration-stable2
OVB jobs are failing inconsistently; waiting for the next re-run
https://bugs.launchpad.net/tripleo/+bug/1898046
Centos-8 Train OVB jobs are failing tempest - Failed to establish authenticated ssh connection to cirros
**stein**
<holser> marios tempest failed due to bad image
<akahat> weshay|ruck, no patches are yet merged: https://review.rdoproject.org/r/#/c/28081/
<holser> we need to merge https://review.opendev.org/#/c/755401/ https://review.opendev.org/#/c/755402/
<holser> then I'll recheck https://review.opendev.org/#/c/755220/
<holser> and then stein and train jobs should be fixed
<holser> ussuri is a different story
<holser> we mess up networking during upgrade as we run deploy script from ansible operator and upgrade script from quickstart-extra
<weshay|ruck> holser, ya.. waiting for a +1 from ci
**rocky**
**queens**
### OSP
---
## Sept 30th
### TripleO
* gate
* periodic / 3rd party
**master**
- Bug: https://bugs.launchpad.net/tripleo/+bug/1897863 (not a promotion blocker)
- https://review.opendev.org/#/c/755252/ (Add support for rdo openvswitch layered upgrade special treatment.)
- All jobs are **GREEN** except the standalone upgrade master job
**ussuri**
- All jobs are **GREEN** except the standalone upgrade ussuri job
**train**
- we are getting NODE_FAILURE on periodic-tripleo-centos-8-train-promote-promoted-components-to-tripleo-ci-testing; waiting for the next run.
### OSP