# Ruck Rover 2022-10-14 - 2022-10-20
###### tags: `ruck_rover`
###### Next RR notes: https://hackmd.io/wtT4lbOSSeuLcRS2aPTQAQ
###### Previous RR notes: https://hackmd.io/J4_ZyTvITtS51Wvmd5feRw
##### ruck & rover: marios & dasm
[RDO Cockpit](http://dashboard-ci.tripleo.org/d/HkOLImOMk/upstream-and-rdo-promotions?orgId=1) / [RHOS Cockpit](http://tripleo-cockpit.lab4.eng.bos.redhat.com)
[RDO Promoter](http://promoter.rdoproject.org/promoter_logs/) / [RHOS Promoter](http://10.0.110.143/promoter_logs/)
[OpenStack Program Meeting 2022](https://docs.engineering.redhat.com/pages/viewpage.action?spaceKey=PRODCHAIN&title=Meeting+notes)
Zuul Status:
* [opendev.org:openstack](https://zuul.opendev.org/t/openstack/status/)
* [rdoproject.org:rdoproject.org](https://review.rdoproject.org/zuul/status)
* [redhat.com:tripleo-ci-internal](https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/status)
## Active bugs
* https://bugs.launchpad.net/tripleo/+bug/1990480 - Tempest test test_create_update_port_with_dns_domain failure KeyError: 'dns_domain'
* https://bugs.launchpad.net/tripleo/+bug/1991093 - compute and network tempest tests failing on fs35 train
* https://bugs.launchpad.net/tripleo/+bug/1987092 - Pacemaker performance causes intermittent galera issues in loaded CI env
* https://bugs.launchpad.net/tripleo/+bug/1993262 - periodic-tripleo-ci-centos-9-ovb-3ctlr_1comp-featureset001-component-master-validation log pollution leads to intermittent failures
* https://bugs.launchpad.net/tripleo/+bug/1993730 - Wallaby c8 and c9 OVB jobs are failing the modify image step - mount point does not exist (steve left notes)
---
## Oct 21
### New/Transient/No bug yet:
#### d/stream
##### rhel8/16.2 - still hitting registry issues (https://bugzilla.redhat.com/show_bug.cgi?id=2135432#c6); manually re-kicked, and the openstack-periodic-integration-rhos-16.2 line is currently running
##### centos8 components (IBM cloud) are stuck and holding up the component lines, e.g. https://review.rdoproject.org/zuul/buildset/e41b7c8fbba142b0b0be4d5929ca6739 (15 hours in progress)
##### https://bugs.launchpad.net/tripleo/+bug/1984237 -> hitting check and also periodic integration, e.g. https://review.rdoproject.org/zuul/build/e2c88a92218c4f1f98b4e03010d13b3f
---
## Oct 20
* **Upstream Integration**
* master: 2022-10-20
* tp: https://review.rdoproject.org/r/c/testproject/+/45352
* wallabyc9: 2022-10-19
* tp with image mount revert: https://review.rdoproject.org/r/c/testproject/+/45405
* wallaby c8: 2022-10-20
* revert temp skip: ~~https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/44769~~
* tp: https://review.rdoproject.org/r/c/testproject/+/45405
* train: 2022-10-17 → 2022-10-20
* tp: https://review.rdoproject.org/r/c/testproject/+/45709
* Run build-images.sh is failing
* **Upstream Components**
* master components:
* wallabyc9 components:
* wallabyc8 components:
* train components:
* **Downstream**:
**Blocker:**
* https://bugzilla.redhat.com/show_bug.cgi?id=2135432 - containers build push time out
* https://redhat.service-now.com/help?id=rh_ticket&is_new_order=true&table=incident&sys_id=e14a38db872e999807c9ed3c8bbb35c8
* https://bugzilla.redhat.com/show_bug.cgi?id=2136053
* https://bugzilla.redhat.com/show_bug.cgi?id=2135616 - ovb issue
* https://redhat.service-now.com/help?id=rh_ticket&table=sc_req_item&sys_id=36aeadc78726d9d0d5cc642c8bbb3518&view=ess
* **Integration lines**:
* **rhos17 on rhel9**: **promoted 14-oct**
* **rhos17.1 on rhel9**: **promoted 12-oct**
* rerunning 3 failing OVB jobs
* **rhos17.1 on rhel8**: **promoted 12-oct**
* all passed except mixed rhel
* **rhos16.2**: **promoted 13-oct**
* all jobs passed except baremetal - line kicked again now (may need to fix baremetal manually; will check that tomorrow)
---
## Oct 19
* **Upstream Integration**
* master: 2022-10-14
* ~~All jobs **blocked**~~: ~~https://bugs.launchpad.net/tripleo/+bug/1993343~~
* tp: https://review.rdoproject.org/r/c/testproject/+/45352 **to watch**
* wallabyc9: 2022-10-19
* ~~All jobs **blocked**~~: ~~https://bugs.launchpad.net/tripleo/+bug/1993343~~
* tp: ~~https://code.engineering.redhat.com/gerrit/c/testproject/+/430001~~
* wallaby c8: 2022-10-16
* tp: https://review.rdoproject.org/r/c/testproject/+/45352
* fs001 failed tempest tests. If the rerun doesn't hit the same failed tests we can skip and promote
* tp with different sets of tempest failure:
* https://review.rdoproject.org/zuul/build/6615dc954f3d40d39d3d6413728f2f2a
* https://review.rdoproject.org/zuul/build/2dbb016c110f46899ba0203597704ed6
* Skip and promote patch: https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/45695
* train: 2022-10-17:
* tp: ~~https://review.rdoproject.org/r/c/testproject/+/45690~~
* tp: https://code.engineering.redhat.com/gerrit/c/testproject/+/430001 **to watch**
* tp: https://review.rdoproject.org/r/c/testproject/+/45701 **to watch**
* tp: https://code.engineering.redhat.com/gerrit/c/testproject/+/431902 **to watch**
* **Upstream Components**
* master components:
* https://bugs.launchpad.net/tripleo/+bug/1993343
* wallabyc9 components:
* wallabyc8 components:
* train components:
* **Downstream**:
**Blocker:** https://bugzilla.redhat.com/show_bug.cgi?id=2135432#c3
* **Integration lines**:
* **rhos17 on rhel9**: **promoted 14-oct**
* **rhos17.1 on rhel9**: **promoted 12-oct**
* **rhos17.1 on rhel8**: **promoted 12-oct**
* **rhos16.2**: **promoted 13-oct**
pinged on rhos-ops:
~~~
<bhagyashris> Hi Team, we are still hitting retry limit issue and that is causing promtion blocker at downstream'
<bhagyashris> 2022-10-19 05:42:29.219305 | primary | "msg": "Failure downloading http://download.devel.redhat.com/rcm-guest/puddles/OpenStack/rhos-release/rhos-release-latest.noarch.rpm, Request failed: <urlopen error [Errno -2] Name or service not known>",
<bhagyashris> fbo, wznoinsk|ruck ^
<bhagyashris> https://sf.hosted.upshift.rdu2.redhat.com/logs/openstack-component-cloudops/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-rhel-9-scenario002-standalone-cloudops-rhos-17.1/ab8ba26/job-output.txt
<dpawlik> cc kforde ^^
<dpawlik> do we have some network issues?
<dpawlik> only one outage topic is related to network: https://groups.google.com/u/0/a/redhat.com/g/outage-list/c/h8-ZkLuspxk
<dpawlik> and its not related
<dpawlik> bhagyashris: did you hold the node and check if its reachable?
<bhagyashris> dpawlik, currently in the running integration line job we are hitting this issue
<bhagyashris> 2022-10-18 18:03:17.050564 | TASK [get_hash : get md5 file]
<bhagyashris> 2022-10-18 18:03:37.609246 | primary | ERROR
<bhagyashris> 2022-10-18 18:03:37.609628 | primary | {
<bhagyashris> 2022-10-18 18:03:37.609678 | primary | "dest": "/home/zuul/workspace/delorean.repo.md5",
<bhagyashris> 2022-10-18 18:03:37.609705 | primary | "elapsed": 20,
<bhagyashris> 2022-10-18 18:03:37.609732 | primary | "msg": "Request failed: <urlopen error [Errno -2] Name or service not known>",
<bhagyashris> 2022-10-18 18:03:37.609781 | primary | "url": "https://osp-trunk.hosted.upshift.rdu2.redhat.com/rhel8-osp17-1/promoted-components/delorean.repo.md5"
<bhagyashris> 2022-10-18 18:03:37.609805 | primary | }
<bhagyashris> locally it's accessible "https://osp-trunk.hosted.upshift.rdu2.redhat.com/rhel8-osp17-1/promoted-components/delorean.repo.md5
<bhagyashris> not sure why it's causing issue on job node
* evallesp (~evallesp@10.39.194.108) has joined
<bhagyashris> dpawlik, added this job https://code.engineering.redhat.com/gerrit/c/testproject/+/431169/6/.zuul.yaml on node hold
<bhagyashris> https://sf.hosted.upshift.rdu2.redhat.com/zuul/t/tripleo-ci-internal/status/change/431169,6
<bhjf> Title: Zuul (at sf.hosted.upshift.rdu2.redhat.com)
<dpawlik> bhagyashris: on vexxhost we have partially same issue: on some host it can not reach trunk.rdoproject.org server
<dpawlik> they fix that, it was something wrong with the host
~~~
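The "Name or service not known" errors in the transcript above are plain DNS resolution (getaddrinfo) failures. A minimal sketch, assuming Python is available on a held node, to check whether the failing hosts resolve at all (host list taken from the log excerpts above):

```python
import socket

# Hosts that failed to resolve in the job output above.
hosts = [
    "download.devel.redhat.com",
    "osp-trunk.hosted.upshift.rdu2.redhat.com",
]

for host in hosts:
    try:
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
        print(f"{host}: resolves to {addrs}")
    except socket.gaierror as exc:
        # EAI_NONAME surfaces as "Name or service not known" in the job logs.
        print(f"{host}: DNS failure: {exc}")
```

Running this both on the held node and locally would confirm whether the failure is node-specific, matching bhagyashris's observation that the URL is accessible locally.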
* **List of hashes that we can promote:**
* rhos16-2 on rhel8:
* ac7a781ab85cfc2c9b1a1b6aad4a50ab
* Missing Jobs:
* periodic-tripleo-ci-rhel-8-bm_envD-3ctlr_1comp-featureset035-rhos-16.2
* periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset035-internal-rhos-16.2
* periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-internal-rhos-16.2
* periodic-tripleo-ci-rhel-8-ovb-1ctlr_2comp-featureset020-internal-rhos-16.2
* rhos17-1 on rhel9:
* 85b7a0a2481df9e73096a6bc88dc71f7
* Missing Jobs:
* periodic-tripleo-ci-rhel-9-ovb-3ctlr_1comp-featureset001-internal-rhos-17.1
* periodic-tripleo-ci-rhel-9-ovb-3ctlr_1comp-featureset035-internal-rhos-17.1
* periodic-tripleo-ci-rhel-9-ovb-1ctlr_2comp-featureset020-rbac-internal-rhos-17.1
* periodic-tripleo-ci-rhel-9-ovb-1ctlr_2comp-featureset020-internal-rhos-17.1
* **Component line**:
---
## Oct 18
* **Upstream Integration**
* master: **2022-10-13**
* tp: https://review.rdoproject.org/r/c/testproject/+/45352
* Again a new bug: https://bugs.launchpad.net/tripleo/+bug/1993343
* wallabyc9: **2022-10-13**
* tp: ~~https://review.rdoproject.org/r/c/testproject/+/45405~~
* wallaby c8:
* tp: https://review.rdoproject.org/r/c/testproject/+/45352
* train: 2022-10-13
* tp: https://review.rdoproject.org/r/c/testproject/+/45690
* **Upstream Components**
* master components:
* wallabyc9 components:
* wallabyc8 components:
* train components:
* **Downstream**:
* **Integration lines**:
* **rhos17 on rhel9**: **promoted 14-oct**
* **rhos17.1 on rhel9**: **promoted 12-oct**
* containers build push job is failing: https://bugzilla.redhat.com/show_bug.cgi?id=2135432
* **rhos17.1 on rhel8**: **promoted 12-oct**
* **rhos16.2**: **promoted 12-oct**
* containers build push job is failing: https://bugzilla.redhat.com/show_bug.cgi?id=2135432
* ovb jobs are failing with RETRY_LIMIT: https://bugzilla.redhat.com/show_bug.cgi?id=2135616
pinged on rhos-ops:
~~~
<bhagyashris> evallesp, wznoinsk|ruck hey currently we are facing this issue for ovb jobs https://bugzilla.redhat.com/show_bug.cgi?id=2135616
<bhagyashris> and this one https://bugzilla.redhat.com/show_bug.cgi?id=2135432 we hit on friday and yesterday on container build push job looks like it's intermittent but some how feeling like infra is not stable
<bhagyashris> and one more is "Could not resolve host: download.devel.redhat.com" is also coming intermittently
<bhagyashris> could you please check
<dpawlik> bhagyashris: did you check outage list
<dpawlik> if there are some DNS maintenance?
<apevec> for upshift registry, I pinged internal pnt infra gchat there where rlandy reported registry issues last week, no new replies yet
<evallesp> Yesterday I found some DNS errors as well... I though it was similar the internal SSO.
<apevec> bhagyashris (IRC): which nameservers do we have now in resolve.conf ?
<apevec> there's other thread in pnt-infra gchat about some nameservers not working
<apevec> > 10.11.142.1 seems to not work
<apevec> > These are the resolvers within RDU2 near RHOS-D:
<apevec> nameserver 10.11.5.160
<apevec> nameserver 10.11.5.19
<apevec> https://sf.hosted.upshift.rdu2.redhat.com/logs/openstack-component-clients/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-internal-clients-rhos-16.2/a2166e8/logs/hostvars-variables.yaml
<apevec> ansible_dns:
<apevec> nameservers:
<apevec> - 10.11.5.19
<apevec> - 10.5.30.45
<bhagyashris> https://sf.hosted.upshift.rdu2.redhat.com/logs/openstack-periodic-integration-rhos-16.2/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-rhel-8-scenario012-standalone-rhos-16.2/fe9fbf1/logs/undercloud/etc/resolv.conf
<apevec> nameserver 10.11.5.19
<apevec> nameserver 10.5.30.45
<apevec> ok so first one is what pnt-infra said, but what is the other one
<dpawlik> if someone is wondering why upstream zuul does not take any new request: "2022-10-18 07:29:32,336 DEBUG zuul.GithubRateLimitHandler: GitHub API rate limit (ansible-collections/community.digitalocean, 20166502) resource: core, remaining: 12500, reset: 1666081772"
<apevec> ah opendev doesn't get some free unlimited account?
<dpawlik> dunno
<dpawlik> I don't think they are using GH a lot
<dpawlik> just a mirror, most things are on opendev side
<apevec> bhagyashris (IRC): so in which tasks Failed to discover available identity versions happens, can you point to the code and how we can reproduce outside CI job?
<bhagyashris> apevec, here is the log https://sf.hosted.upshift.rdu2.redhat.com/logs/openstack-component-clients/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-internal-clients-rhos-16.2/a2166e8/job-output.txt
<bhagyashris> let me pass the taskwhere it failed
<bhagyashris> some where in ovb-manage: Create stack it failed
<apevec> is ovb-manage not producing more debug info?
<bhagyashris> https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/roles/ovb-manage/tasks/ovb-create-stack.yml#L43
<bhagyashris> https://sf.hosted.upshift.rdu2.redhat.com/logs/openstack-component-clients/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-internal-clients-rhos-16.2/a2166e8/logs/bmc-console.log
<bhagyashris> https://sf.hosted.upshift.rdu2.redhat.com/logs/openstack-component-clients/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-rhel-8-ovb-3ctlr_1comp-featureset001-internal-clients-rhos-16.2/a2166e8/logs/failed_ovb_stack.log
<marios> apevec: https://bugzilla.redhat.com/show_bug.cgi?id=2135616#c3 keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://rhos-d.infra.prod.upshift.rdu2.redhat.com:13000/v3/auth/tokens
<bhjf> Bug 2135616: urgent, unspecified, ---, ---, rhos-maint, distribution, NEW , Failed to discover available identity versions when contacting https://rhos-d.infra.prod.upshift.rdu2.redhat.com:13000/v3. Attempting to parse version from URL.
<apevec> bhagyashris (IRC): marios (IRC) https://rhos-d.infra.prod.upshift.rdu2.redhat.com:13000/v3/auth/tokens is reachable from my laptop on VPN, was it temp failure then, is it working now?
<apevec> if still failing, can we hold the node ?
<apevec> but not sure how we do that with an OVB node?
<apevec> this is failing on OC nodes?
<apevec> <bhagyashris> "https://sf.hosted.upshift.rdu2...." <- hmm in this case cloud-init failed b/c > [ 224.941268] cloud-init[1292]: Failed to start openstack-bmc-baremetal-81610_3.service: Unit not found.
<apevec> marios (IRC): which machine's console is what we see bmc-console.log ? It's CentOS 7 ??
<apevec> CentOS Linux 7 (Core)
<apevec> Kernel 3.10.0-1127.10.1.el7.x86_64 on an x86_64
<apevec> and using public centos mirrors: bmc-81610 login: [ 54.231122] cloud-init[1292]: * base: centos.mirrors.hoobly.com
<apevec> [ 54.232969] cloud-init[1292]: * centos-ceph-nautilus: mirror.steadfastnet.com
<apevec> [ 54.233245] cloud-init[1292]: * centos-nfs-ganesha28: mirror.siena.edu
<apevec> [ 54.234583] cloud-init[1292]: * centos-openstack-stein: centos.hivelocity.net
<apevec> [ 54.235488] cloud-init[1292]: * centos-qemu-ev: mirror.umd.edu
<apevec> [ 54.236472] cloud-init[1292]: * epel: forksystems.mm.fcix.net
<apevec> [ 54.238592] cloud-init[1292]: * extras: mirror.umd.edu
<apevec> [ 54.239339] cloud-init[1292]: * updates: mirror.datto.com
<apevec> then using https://trunk.rdoproject.org/centos7/current/
<apevec> after this keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://rhos-d.infra.prod.upshift.rdu2.redhat.com:13000/v3/auth/tokens: ('Connection aborted.', error(104, 'Connection reset by peer'))
<apevec> it continues like error didn't happen, should probably stop, are those systemd unit files generated on the fly?
<apevec> [ 224.790606] cloud-init[1292]: + systemctl daemon-reload
<apevec> [ 224.887689] cloud-init[1292]: + systemctl enable config-bmc-ips
<apevec> [ 224.901780] cloud-init[1292]: Failed to execute operation: No such file or directory
<apevec> [ 224.902855] cloud-init[1292]: + systemctl start config-bmc-ips
<apevec> [ 224.907713] cloud-init[1292]: Failed to start config-bmc-ips.service: Unit not found.
<marios|call> apevec: yeah the bmc is still in c7
<apevec> sigh
<apevec> that's unsupported ;)
<apevec> I mean really, OSC must be old, also it should retry few times
<apevec> https://trunk.rdoproject.org/centos7/current/ is 2020-04-13
<apevec> in any case, bhagyashris (IRC) do we still see that failure or is intermittent ?
<apevec> * in any case, bhagyashris (IRC) do we still see that failure or is it random ?
<apevec> I still don't have a clear case to report to PSI ops
<apevec> before I start looking deeper into OVB code, is stable/2.0 the branch currently in use, based on C7 ?
<apevec> and new dev is in master, based on CS9 ?
<apevec> (while at it, what are the current blockers to move OVB to CS9 ?)
~~~
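Several failures above trace back to individual resolvers ("10.11.142.1 seems to not work"). A hedged sketch that probes a nameserver directly with a hand-built DNS A query, so a dead resolver shows up as a timeout rather than a generic RETRY_LIMIT; the server list comes from the resolv.conf snippets in the chat, and the query name is illustrative:

```python
import socket
import struct

def dns_query_ok(server, name, timeout=3):
    """Send a minimal DNS A query straight to `server` and report
    whether any response arrives before the timeout."""
    # 12-byte header: id, flags (recursion desired), 1 question, 0 answer/auth/additional.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    packet = header + qname + struct.pack(">HH", 1, 1)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        try:
            s.sendto(packet, (server, 53))
            data, _ = s.recvfrom(512)
            return len(data) >= 12  # any well-formed DNS reply
        except OSError:  # timed out or network unreachable
            return False

# Resolvers seen in resolv.conf / the pnt-infra thread above.
for ns in ("10.11.5.19", "10.5.30.45", "10.11.142.1"):
    status = "responds" if dns_query_ok(ns, "docker-registry.upshift.redhat.com", timeout=2) else "no answer"
    print(f"{ns}: {status}")
```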
* **Component line**:
---
## Oct 17
* master: **2022-10-13**
* tp: https://review.rdoproject.org/r/c/testproject/+/45352
* to check: fs020
* fs35:
* /usr/share/openstack-tripleo-heat-templates/ci/environments/ovb-ha.yaml--disable-protected-resource-types file not found
* probably due to a typo (the option looks concatenated onto the template path)
* Bug: https://bugs.launchpad.net/tripleo/+bug/1993139
* Fix WIP: https://review.opendev.org/c/openstack/tripleo-quickstart/+/861590
* wallabyc9: **2022-10-13**
* tp: https://code.engineering.redhat.com/gerrit/c/testproject/+/430001
* tp: https://review.rdoproject.org/r/c/testproject/+/45405
* train: 2022-10-13
* tp: https://review.rdoproject.org/r/c/testproject/+/45407
* fs35 tempest failures
* **Downstream**:
* RETRY_LIMIT: https://osp-trunk.hosted.upshift.rdu2.redhat.com/, https://bootstrap.pypa.io/pip/3.6/get-pip.py, or https://docker-registry.upshift.redhat.com/v2/tripleorhos-17-1-rhel-8/openstack-heat-base/blobs/sha256:3574ac17976440865039b44e5bfd58ca7e504d41eb5f357e4623b234e54b9148 is not reachable
* pinged fbo on rhos-ops
~~~
<bhagyashris> Hi Team we are currently facing retry_limit issue on most of the jobs at downstream due to ("msg": "Status code was -1 and not [200]: Request failed: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897)>",)
<bhagyashris> fbo ^ https://sf.hosted.upshift.rdu2.redhat.com/logs/openstack-periodic-integration-rhos-16.2/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-rhel-8-rhos-16.2-promote-promoted-components-to-tripleo-ci-testing-internal/15ae3df/job-output.txt
<fbo> bhagyashris (IRC) That's probably fixed already
<bhagyashris> fbo, we recently hit with same issue https://code.engineering.redhat.com/gerrit/c/testproject/+/431140/4#message-3fcb221e2c8e36f2c63be255c0b0c7160c43edeb
<bhagyashris> sorry let me share new https://sf.hosted.upshift.rdu2.redhat.com/logs/openstack-periodic-integration-rhos-17.1-rhel9/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-rhel-9-rhos-17.1-promote-promoted-components-to-tripleo-ci-testing-internal/a203734/job-output.txt
<bhagyashris> ignor first link
<bhagyashris> https://code.engineering.redhat.com/gerrit/c/testproject/+/431140/4#message-3732b5c2fbf3f9b548471cd036d6c5e69aa69166
<fbo> it was yesterday
<evallesp> I've updated the certs for osp-trunk prod environment. Now we are getting autorenew certs by: https://gitlab.cee.redhat.com/ansible-playbooks/idm-client-playbooks/-/blob/main/inventory/group_vars/osp-dlrn.yml So I am going to remove its certs deployment by sf-infra ansible-playbook (as we are not going to renew them any more).
<bhagyashris> fbo, https://sf.hosted.upshift.rdu2.redhat.com/logs/40/431140/4/check/periodic-tripleo-rhel-8-rhos-16.2-promote-promoted-components-to-tripleo-ci-testing-internal/a0873e2/job-output.txt
<fbo> curl https://osp-trunk.hosted.upshift.rdu2.redhat.com/ works now
<bhagyashris> fbo, ok let me recheck again
~~~
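The CERTIFICATE_VERIFY_FAILED errors above were resolved by renewing the osp-trunk certs; a quick way to confirm from a workstation is to inspect the certificate the server actually presents and its expiry. A minimal stdlib sketch (hostname taken from the failing jobs; purely a diagnostic aid, not part of the promoter):

```python
import socket
import ssl
from datetime import datetime, timezone

host = "osp-trunk.hosted.upshift.rdu2.redhat.com"  # from the failing jobs above

ctx = ssl.create_default_context()
try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            # notAfter is formatted like "Oct 17 12:00:00 2022 GMT".
            not_after = datetime.strptime(
                cert["notAfter"], "%b %d %H:%M:%S %Y %Z"
            ).replace(tzinfo=timezone.utc)
            print(f"{host}: cert valid until {not_after.isoformat()}")
except ssl.SSLCertVerificationError as exc:
    print(f"{host}: verification failed: {exc.verify_message}")
except OSError as exc:
    print(f"{host}: unreachable: {exc}")
```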
* **Integration lines**:
* **rhos17 on rhel9**: **promoted 14-oct**
* **rhos17.1 on rhel9**: **promoted 12-oct**
* containers build push job is failing: https://bugzilla.redhat.com/show_bug.cgi?id=2135432
* **rhos17.1 on rhel8**: **promoted 12-oct**
* **rhos16.2**: **promoted 12-oct**
* containers build push job is failing: https://bugzilla.redhat.com/show_bug.cgi?id=2135432
* **Component line**:
---
## Oct 14
* **Upstream Integration**
* master: **2022-10-13**
* tp: ~~https://review.rdoproject.org/r/c/testproject/+/45352~~
* wallabyc9: **2022-10-13**
* tp: https://review.rdoproject.org/r/c/testproject/+/45405
* train: 2022-10-13
* tp: https://review.rdoproject.org/r/c/testproject/+/45407
* **Upstream Components**
* master components:
* ~~https://review.rdoproject.org/r/c/testproject/+/45657~~
* wallabyc9 components:
* ~~tp: https://review.rdoproject.org/r/c/testproject/+/45420~~
* wallabyc8 components:
* tp: https://review.rdoproject.org/r/c/testproject/+/45659
* train components:
* ~~https://review.rdoproject.org/r/c/testproject/+/45660~~
* **Downstream**:
* RETRY_LIMIT: https://osp-trunk.hosted.upshift.rdu2.redhat.com/, https://bootstrap.pypa.io/pip/3.6/get-pip.py, or https://docker-registry.upshift.redhat.com/v2/tripleorhos-17-1-rhel-8/openstack-heat-base/blobs/sha256:3574ac17976440865039b44e5bfd58ca7e504d41eb5f357e4623b234e54b9148 is not reachable
* **Integration lines**:
* **rhos17 on rhel9**: **promoted 09-oct**
* fs001, containers-multinode and bm-fs001 jobs are failing; re-running those jobs here: https://code.engineering.redhat.com/gerrit/c/testproject/+/429803
* fs001 and containers-multinode jobs passed.
* bm-fs001-envB is still failing; re-running it here: https://code.engineering.redhat.com/gerrit/c/testproject/+/431169 (the failure looks outage-related)
* **rhos17.1 on rhel9**: **promoted 12-oct**
* **rhos17.1 on rhel8**: **promoted 12-oct**
* Retry: https://bootstrap.pypa.io/pip/3.6/get-pip.py is not reachable
* containers build is failing; re-running here: https://code.engineering.redhat.com/gerrit/c/testproject/+/431140
* we are still hitting the timeout issue
~~~
2022-10-14 07:16:56.005258 | primary | Head "https://docker-registry.upshift.redhat.com/v2/tripleorhos-17-1-rhel-8/openstack-heat-base/blobs/sha256:3574ac17976440865039b44e5bfd58ca7e504d41eb5f357e4623b234e54b9148": dial tcp: lookup docker-registry.upshift.redhat.com on 10.5.30.45:53: read udp 192.168.200.26:44983->10.5.30.45:53: i/o timeout
~~~
* the second re-run failed with a 503, so again it looks like an outage issue; we need to keep an eye on https://code.engineering.redhat.com/gerrit/c/testproject/+/429803
* **rhos16.2**: **promoted 12-oct**
* **Component line**:
* checked all the component line jobs for all the above releases and hit the testproject patch for the failing jobs: https://code.engineering.redhat.com/gerrit/c/testproject/+/431159
* all the failing jobs passed there except the container-multinode-client-rhos16.2 job; re-running that job here: https://code.engineering.redhat.com/gerrit/c/testproject/+/429803
---