ruck_rover
rlandy to check the failures below:
periodic-tripleo-ci-centos-9-scenario004-standalone-master TIMED_OUT 3h 09m 09s
periodic-tripleo-ci-centos-9-standalone-full-tempest-scenario-master FAILURE 2h 28m 48s
periodic-tripleo-ci-centos-9-scenario010-kvm-internal-standalone-master
Rerunning failed jobs: https://review.rdoproject.org/r/c/testproject/+/39357 https://review.rdoproject.org/r/c/tripleo-downstream-trigger-nested-virt/+/42965
Master overcloud deployment failing with 'Node' object has no attribute 'uuid'
Rerunning failed jobs here
Rerunning here https://review.rdoproject.org/r/c/testproject/+/26273
A bunch of jobs failed:
periodic-tripleo-ci-centos-8-standalone-full-tempest-api-victoria fails on different tempest errors all the time,
sometimes it fails on LP bug #1960310
periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-victoria fails on different tempest and non-tempest errors all the time
periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001-victoria ditto
periodic-tripleo-ci-centos-8-ovb-1ctlr_2comp-featureset020-victoria ditto
Jobs keep failing because of MySQL errors in various/random services,
example
Job periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-victoria
might be failing because of a real issue:
2022-05-18T05:59:27.276336835+00:00 stderr F 2022-05-18T05:59:27Z|00021|patch|WARN|Bridge 'br-tenant' not found for network 'tenant'
log example 1,
log example 2.
But it may also be related to the MySQL issues mentioned above, because those errors show up here as well:
https://logserver.rdoproject.org/openstack-periodic-integration-stable2/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-victoria/986574c/logs/undercloud/var/log/containers/neutron/server.log.txt.gz
Same for periodic-tripleo-ci-centos-8-standalone-upgrade-victoria.
Jobs periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001-victoria
and
periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset035-victoria
failed twice with:
2022-05-21 08:44:18.734207 | primary | ERROR: Cannot install oslo.utils because these package versions have conflicting dependencies.
2022-05-21 08:44:18.734262 | primary |
2022-05-21 08:44:18.734277 | primary | The conflict is caused by:
2022-05-21 08:44:18.734291 | primary | The user requested oslo.utils
2022-05-21 08:44:18.734303 | primary | The user requested (constraint) oslo-utils===4.6.1
2022-05-21 08:44:18.734314 | primary |
2022-05-21 08:44:18.734324 | primary | To fix this you could try to:
2022-05-21 08:44:18.734335 | primary | 1. loosen the range of package versions you've specified
2022-05-21 08:44:18.734347 | primary | 2. remove package versions to allow pip attempt to solve the dependency conflict
2022-05-21 08:44:18.734361 | primary |
2022-05-21 08:44:18.734373 | primary | ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
Rerunning here: https://review.rdoproject.org/r/c/testproject/+/26273
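For reference, the resolver output above can be poked at outside the job; this is only a sketch, assuming the run was installing oslo.utils against the published victoria upper-constraints (the constraint URL and venv path below are illustrative, not taken from the job logs):
# Hypothetical local reproduction of the ResolutionImpossible error above;
# the venv path and Python interpreter are assumptions.
python3 -m venv /tmp/uc-check && source /tmp/uc-check/bin/activate
pip install 'oslo.utils' -c https://releases.openstack.org/constraints/upper/victoria
# If this resolves cleanly, the conflict likely comes from whatever else the job
# installs alongside oslo.utils rather than from the constraint alone.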
Analysis: the 18th and 16th failed on the same tempest test; the 17th had a different tempest failure. Rerunning to confirm whether the issue is consistent.
https://logserver.rdoproject.org/openstack-component-tripleo/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001-tripleo-train/319b011/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz
https://logserver.rdoproject.org/73/26273/20/check/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001-tripleo-train/1de3186/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz
https://logserver.rdoproject.org/openstack-component-tripleo/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-ovb-3ctlr_1comp-featureset001-tripleo-train/58fce8a/logs/undercloud/var/log/tempest/failing_tests.log.txt.gz
Rerunning failed jobs here
Master integration line
https://bugs.launchpad.net/tripleo/+bug/1971465 - fs001 and fs035 OVB jobs failing tempest - identity/haproxy connection errors
Master Component line
https://bugs.launchpad.net/tripleo/+bug/1973038 Master Node provisioning failing because nodes failing to boot and entering Emergency shell
https://bugs.launchpad.net/tripleo/+bug/1972163 Master cs9 standalone full tempest api failing for volume tempest tests and with error - 'Volume failed to detach from server'
Wallaby
https://bugs.launchpad.net/tripleo/+bug/1964940 Compute tests are failing with failed to reach ACTIVE status and task state "None" within the required time.
New bug:
OVB jobs failed in the rerun here
Components which have failing jobs:
Security and tripleo are failing because of the known bug above.
The security failure looks wrong; that means we don't have proper isolation, because we include delorean-current.
Random tempest issue is back
Master integration line:
Master component line - rerunning here
C9 wallaby integration line - 2022-05-17 - All green
C9 wallaby component line - rerunning here
C8 wallaby integration line - last_promotion=2022-05-16 - All green - No new hash when I checked this morning.
C8 wallaby component line - All green
Victoria integration line - last_promotion=2022-05-16 - All green - No new hash when I checked this morning.
Victoria component line - All green
Train integration line - last_promotion=2022-05-16 - All green - No new hash when I checked this morning.
Train component line - All green
OSP16.2 integration line - last_promotion=2022-05-16 - All green
OSP16.2 Component line - All green
OSP17/9 Integration line - In rerun
OSP17/9 Component line - All green
OSP17/8 Integration line - last_promotion=2022-05-17 - All green
OSP17/8 Component line - All green
Pinged jpodivin about that - we are rechecking https://review.opendev.org/c/openstack/tripleo-validations/+/841378 - to see if the issue is consistent.
Yatin found the issue; Jiri is fixing it.
<ykarel> https://github.com/openstack/cliff/commit/584352dcd008d58c433136539b22a6ae9d6c45cc is in 2.18.0+
<ykarel> but in train we have 2.16.0
<ykarel> jpodivin, the issue is reproducable easily with tox
<ykarel> too
<ykarel> with cliff 2.16.0
<ykarel> also iirc it was said stable/1.6 branch was for train release, but still i see it's testing with master u-c
<ykarel> shouldn't it be using stable/train
<jpodivin> it should be
<ykarel> ok please fix then it should catch atleast error like this
<ykarel> https://github.com/openstack/requirements/blob/stable/train/upper-constraints.txt#L285
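In short, the stable/1.6 branch appears to be testing with the master upper-constraints while the train gate pins cliff===2.16.0, which is why the failure only shows up against train. A rough way to see the difference locally; this is a sketch only, assuming a throwaway venv with plain pip (the constraint URLs are the published branch redirects, and nothing below is taken from the repo's tox.ini):
# Assumption: no project-specific tox wiring, just comparing what each
# constraints branch pins for cliff.
pip install cliff -c https://releases.openstack.org/constraints/upper/train    # should resolve to cliff===2.16.0, per the L285 link above
pip install cliff -c https://releases.openstack.org/constraints/upper/master   # should pull a newer cliff that already carries 584352d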
That hash is only missing fs001 and fs035 - we can choose to promote it (7 days out)