# Current Live updated schedule for PHX2 -> IAD2 move

## Acronyms Used in Document

- MVFE - Minimal Viable Fedora Environment
- PHX2 - Chandler Arizona datacentre
- RDU-CC - Morrisville NC datacentre community cage
- IAD2 - Ashburn Equinix datacentre
- AAA - Authentication, Authorization, Auditing
- IPA - Identity, Policy, Audit
- FAS - Fedora Account System (older AAA solution)
- FreeIPA - Newer account system

### Shipment Equipment

The following lists cover the hardware which is being shipped on specific dates and where the hardware is going.

#### June 15th Fedora to IAD

**note:** it looks like it will take 2 to 3 days to unrack the systems in PHX2. It will also take 1 week for the hardware to travel across the country.

| Serial No | Model | Current Hostname | Current Rack | New Rack | New Hostname |
| --------- | ----- | ---------------- | ------------ | -------- | ------------ |
| ??? | Ampere | ??? | Rack 150 | 101 | ??? |
| ??? | Ampere | ??? | Rack 147 | 101 | ??? |
| ??? | Dell R630 | fed-cloud13 | Rack 157 | 101 | ??? |
| ??? | Dell R430 | sign02 | Rack 148 | 01 | ??? |
| ??? | Dell R430 | bkernel04 | Rack 150 | 101 | ??? |
| ??? | Mustang | 3PBC-A0000Z | Rack 146 | 101 | ??? |
| ??? | Mustang | 3PBD-A0004P | Rack 146 | 101 | ??? |
| ??? | IBM Power 9 | ??? | Rack 147 | 101 | ??? |
| ??? | IBM Power 8 | ??? | Rack 147 | 101 | ??? |
| ??? | Dell R630 | virthost02.stg | Rack 153 | 102 | ??? |
| ??? | Dell R630 | virthost01.stg | Rack 153 | 102 | ??? |
| ??? | Dell R640 | virthost05 | Rack 154 | 102 | ??? |
| ??? | Dell R640 | virthost04 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | virthost06 | Rack 153 | 102 | ??? |
| ??? | Dell R630 | virthost02 | Rack 153 | 102 | ??? |
| ??? | Dell R430 | sign01 | Rack 148 | 102 | ??? |
| ??? | Dell R630 | backup01 | Rack 155 | 102 | ??? |
| ??? | Dell R430 | sign01 | Rack 148 | 102 | ??? |
| ??? | Dell R730 | virthost01 | Rack 153 | 102 | ??? |
| ??? | Dell R630 | virthost12 | Rack 155 | 102 | ??? |
| ??? | Dell R630 | virthost22 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | fed-cloud15 | Rack 157 | 102 | ??? |
| ??? | Dell R630 | bvirthost01 | Rack 148 | 102 | ??? |
| ??? | Dell R630 | bvirthost04 | Rack 149 | 102 | ??? |
| ??? | Dell R630 | bvirthost05 | Rack 149 | 102 | ??? |
| ??? | Dell R630 | bvirthost13 | Rack 149 | 102 | ??? |
| ??? | Dell R630 | bvirthost14 | Rack 149 | 102 | ??? |
| ??? | Dell R630 | bvirthost15 | Rack 149 | 102 | ??? |
| ??? | Dell R430 | bkernel03 | Rack 150 | 102 | ??? |
| ??? | Dell R630 | virthost14 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | virthost19 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | virthost21 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | autosign | Rack 149 | 102 | ??? |
| ??? | Dell FX2 | Builders FX | Rack 150 | 102 | ??? |
| ??? | IBM Power 9 | ??? | Rack 152 | 102 | ??? |
| ??? | IBM Power 8 | ??? | Rack 147 | 102 | ??? |
| ??? | IBM Power 9 | ??? | Rack 147 | 103 | ??? |
| ??? | Ampere | ??? | Rack 147 | 103 | ??? |
| ??? | Ampere | ??? | Rack 157 | 103 | ??? |
| ??? | Ampere | ??? | Rack 149 | 103 | ??? |
| ??? | Cavium | ??? | Rack 146 | 103 | ??? |
| ??? | Ampere | ??? | Rack 147 | 103 | ??? |
| ??? | Ampere | ??? | Rack 147 | 103 | ??? |
| ?RT | Ampere | ?RT | Rack 147 | 103 | ??? |
| ?LT | Ampere | ?LT | Rack 149 | 103 | ??? |
| ?RX | Ampere | ?RX | Rack 149 | 103 | ??? |
| ?NG | Ampere | ?NG | Rack 150 | 105 | ??? |
| ?LV | Ampere | ?LV | Rack 150 | 105 | ??? |
| ?NT | | ?NT | Rack 150 | 105 | ??? |
| ?RH | Ampere | ?RH | Rack 150 | 104 | ??? |
| ??? | Dell R630 | virthost04.stg | Rack 154 | 104 | ??? |
| ??? | Dell R630 | virthost03.stg | Rack 153 | 104 | ??? |
| ?V2 | Ampere | ?V2 | Rack 149 | 104 | ??? |
| ??? | Dell R630 | vhost-comm01 | Rack 152 | 104 | ??? |
| ??? | Dell R640 | qa01 | Rack 151 | 104 | ??? |
| ?MN | Ampere | ?MN | Rack 147 | 104 | ??? |
| ?P4 | Ampere | ?P4 | Rack 147 | 104 | ??? |
| ?R6 | Ampere | ?R6 | Rack 147 | 104 | ??? |
| ??? | Ampere | ??? | Rack 150 | 10104 | ??? |
| ??? | Dell FX2 | Builders FX | Rack 148 | 104 | ??? |
| ??? | IBM Power 9 | ??? | Rack 152 | 104 | ??? |
| ??? | IBM Power 8 | ??? | Rack 147 | 104 | ??? |
| ??? | OpenGear | Serial | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A00053 | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0009P | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0009B | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0002B | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0008J | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0006W | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0004G | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0007T | Rack 146 | 105 | ??? |
| ??? | IBM Power 8 | ??? | Rack 147 | 105 | ??? |
| ??? | IBM Power 8 | ??? | Rack 152 | 105 | ??? |

## Phase 1

### Week 00 (2020-03-02 -> 2020-03-08)

- [x] Hardware shipping needs to be planned out
- [x] Rack layouts for IAD2 and RDU-CC need to be finalized
- [x] Work with RH IT on what they need in network diagrams

### Week 01 (2020-03-09 -> 2020-03-15)

- [x] Get RHEL-8 virthost built
- [x] Deliver network diagrams to RHIT
- [x] Announce downtime for communishift
- [x] ~~Try to find a place for communishift to temp run? - no~~

### Week 02 (2020-03-16 -> 2020-03-22)

- [x] 2020-03-17 Fedora 32 Beta Date

### Week 03 (2020-03-23 -> 2020-03-29)

- [x] Re-Announce downtime for communishift
- [x] Get updated tasks and timeline set out
- [x] Work out COVID-19 Contingencies

### Week 04 (2020-03-30 -> 2020-04-05)

- [x] Virthost03 skunkworks (kevin)
  - [x] RHEL-8 install
  - [x] test encryption over bridge
  - [x] test TEAM network
- [x] Virthost05 skunkworks (kevin)
  - [x] uefi EL8 install instructions
  - [x] secure boot
- [x] Re-Announce downtime for communishift
- [x] Set up noggin instance in AWS to replace communishift (kevin)

### Week 05 (2020-04-06 -> 2020-04-12)

- [x] Create DNS templates for IAD2
- [x] Create template DHCP for mgmt hosts
- [x] Add more items to ship to IAD2
- [x] Get network layout for RDU-CC finalized
  - [x] public ip address count and space needed
  - [x] mgmt network 172.23.1.??/24
  - [x] private network for openshift backnodes 172.23.2.??/24
- [x] Collect all mgmt mac addresses
- [x] Collect all hardware mac addresses for debugging
- [x] Set mgmt interfaces to DHCP before getting shipped
  - [x] opengear
  - [x] amperes
  - [ ] power hardware
- [x] 2020-04-09 Fedora 32 Final Freeze
- [x] Power needs to be ready in RDU-CC cage
- [x] Option A/B meeting for Phase 2
  - [x] Option A - We ship items and continue with A
  - [ ] Option B - We go with planning out move of items within PHX2 and look at the larger move to IAD2 next year

## Phase 2

### Week 06 (2020-04-13 -> 2020-04-19)

- [x] Put move details into status.fedoraproject.org so people can see what is going on.
- [x] Take down and ship communishift hardware to RDU-CC
- [x] Take down and ship extra hardware to IAD2
- [x] Begin takedown of communishift hardware
  - [x] Communishift down 13th April - 1st May
- [x] Power off systems in racks
- [x] Work with logistics for pack and move
- [x] Power should be on by 17th - this is power to RDU-CC

### Week 07 (2020-04-20 -> 2020-04-26)

- [x] Hardware should arrive at RDU-CC
  - [x] Rerack from 27th April to 1st May
- [x] Hardware should arrive at IAD2
  - [x] work with Shaun's team to get systems racked/stacked in 101

### Week 08 (2020-04-27 -> 2020-05-03)

- [x] Work out temporary root password for installs
- [x] 2020-04-28 Fedora 32 release
- ~~[ ] set ip address in mac~~
- ~~[ ] set up admin user~~
- ~~[ ] set ipmi and serial over lan access~~

### Week 09 (2020-05-04 -> 2020-05-10)

- [x] Write howto on Dell mgmt setup (smooge)
- [x] IAD2 work (see IAD2 bootstrap)
  - [x] find all expected hardware
  - [x] Install new hardware

### Week 10 (2020-05-11 -> 2020-05-17)

- [x] IAD2 work (see IAD2 bootstrap)

### Week 11 (2020-05-18 -> 2020-05-24)

- [x] IAD2 work (see IAD2 bootstrap)
- [ ] RDU Bootstrap
  - [ ] Do any items in RDU-CC that time allows
  - [ ] Work with IT on any network layout issues left for RDU2 site.
  - [ ] communishift proxies with private + external interfaces
- [x] Give internal IT new DNS servers ip address
- [x] Give internal IT new SMTP server ip address
- [ ] Move final virthost-cc boxes into new RDU-CC racks

### Week 12 (2020-05-25 -> 2020-05-31)

- [x] IAD2 work (see IAD2 bootstrap)
- [ ] Rack 103 must be up. Get openqa and other systems in.

## Phase 4

### Week 13 (2020-06-01 -> 2020-06-07)

- [ ] Final checklist and approval of IAD2 MVFE
  - [ ] Test email routing through IAD2 proxies
  - [ ] Test www proxies
  - [ ] Test builds
  - [ ] Test openvpn
  - [ ] Test rsync
  - [ ] Test route to s390x
  - [ ] Test route to bugzilla STOMP message bus
  - [ ] make sure nagios says everything is green
- [ ] Bring up openqa in IAD2
- [ ] Change Fedora DNS to shorter times for major change
- [ ] GO/NO-GO meeting:
  - [ ] Option A: Ship everything left in PHX2 to IAD2 on June 15th
  - [ ] Option B: Look at internal move of PHX2 equipment

#### Fedora Datacenter Move week (and week before) - detailed tasks

#### Notes:

* db switcharoo means: take service down in phx2, dump db in phx2, copy dump to iad2, load dump in iad2, bring up service in iad2, test, switch dns. (A hedged sketch of these steps, with placeholder hostnames, follows the 2020-06-03 checklist below.)
* snapmirror switcharoo means: take service down in phx2, umount all users of volume, sync final data via snapmirror, break snapmirror and make rw in iad2.

openqa - has to be done in sync with koji or be down for some time.

- [ ] openqa - check on timing
  - [ ] db switcharoo: openqa
  - [ ] snapmirror switcharoo fedora_prod_openqa

outstanding questions:

- Did we miss any applications/services?
- When do we want to move openqa? It's pretty self contained.

#### 2020-06-01 - monday

Testing day

[Fedora Datacenter Test Plan](/op6N_nIaR7aMzw9Ib-sDAQ)

#### 2020-06-02 - tuesday

Penultimate testing day

[Fedora Datacenter Test Plan](/op6N_nIaR7aMzw9Ib-sDAQ)

#### 2020-06-03 - wed

Last testing day

- [x] 16:00 UTC: move dl.fedoraproject.org over to iad2
  - [x] change dns
  - [x] disable rsync/httpd on phx2 download servers
  - [ ] destroy all phx2 download vm's.
  - [x] send note to mirror-admins list in case someone finds problems
- [x] Test everything we can with a local /etc/hosts file and vpn pointing to iad bastions.
- [x] 21:00 UTC: Add all iad2 vpn hosts to vpn.fedoraproject.org dns with a -iad2 to the name and in the 192.168.20.x net.
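The "db switcharoo" named in the Notes above is repeated for many services over the following days. The sketch below is an illustration only of that sequence for a PostgreSQL-backed service: the hostnames (app01/db01), the database name (appdb), and the dump path are placeholder assumptions, not the exact hosts or commands used during the move. The snapmirror switcharoo follows the analogous pattern (unmount users, final sync, break mirror and make rw) on the NetApp side.

```bash
# Hedged sketch of a "db switcharoo" for one PostgreSQL-backed service.
# Hostnames, database name, and paths are illustrative placeholders.

# 1. Take the service down in phx2 so no new writes land in the old database.
ssh app01.phx2.fedoraproject.org 'sudo systemctl stop httpd'

# 2. Dump the database in phx2.
ssh db01.phx2.fedoraproject.org \
    'sudo -u postgres pg_dump --format=custom appdb > /tmp/appdb.dump'

# 3. Copy the dump to iad2 (routed through the local host with scp -3).
scp -3 db01.phx2.fedoraproject.org:/tmp/appdb.dump \
       db01.iad2.fedoraproject.org:/tmp/appdb.dump

# 4. Load the dump in iad2.
ssh db01.iad2.fedoraproject.org \
    'sudo -u postgres pg_restore --clean --dbname=appdb /tmp/appdb.dump'

# 5. Bring the service up in iad2, test it, then switch dns/haproxy to iad2.
ssh app01.iad2.fedoraproject.org 'sudo systemctl start httpd'
```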
#### 2020-06-04 - thursday

Batcave and wiki day (testing after this point will be difficult as vm's are connected to the phx2 vpn)

- [ ] 15:00 UTC (start): switch to using batcave01.iad2
  - [ ] shutdown access to batcave01.phx2
  - [ ] final sync of all data
    - [ ] /home
    - [ ] /git
    - [ ] /srv/* --exclude pub
    - [ ] /root
  - [ ] snapmirror switcharoo: fedora_app. WARNING: builds need this for rhel things, no epel builds while it's changing, wiki needs this for attachments
    - [ ] remount old volume in phx2 on kojipkgs*.phx2 to keep epel builds working
  - [ ] test ansible-playbook runs from batcave01.iad2
  - [ ] change dns for infrastructure.fedoraproject.org to batcave01.iad2
  - [ ] power off batcave01.phx2 vm
- [ ] have all iad2 vpn hosts reconnect to bastion01.phx2 so they can take traffic as we switch applications over to them.
- [ ] change the "IAD2" view of the fedoraproject.org zone to use PHX2/external things for rabbitmq/etc. i.e. where we have an "if IAD2" stanza, change it to match the PHX2 one. This allows iad2 services to connect to phx2 ones until we move those.
- [ ] set all dns zones to 5m ttl
- [ ] wiki (needs fedora_app for attachments)
  - [ ] take wiki*phx2 vms down
  - [ ] db switcharoo: db03/fpo-mediawiki (note this is mariadb)
  - [ ] upgrade run (wiki01.iad2 is newer than wiki01.phx2)
  - [ ] test via local curl and the like
  - [ ] change haproxy to use wiki01-iad2 (so in proxies -> bastion01.phx2 -> wiki01.iad2)
- [ ] grobisplitter - move to batcave01.iad2

#### 2020-06-05 - friday

Zodbot / meeting day

- [ ] migrate value01 / zodbot
  - [ ] pick time with no meetings planned
  - [ ] stop zodbot/httpd on value01.phx2
  - [ ] rsync data from value01.phx2 to value01.iad2
  - [ ] bring up zodbot / fedmsg-irc from value01.iad2
  - [ ] change value01 in proxies to value01-iad2

#### 2020-06-06 - saturday

Hope for a quiet day catching up on our sleep.

#### 2020-06-07 - sunday

- [ ] disable rawhide composes after Sunday's completes
- [ ] confirm no new packages will be added, no unretirements processed.
- [ ] disable backups on backup01.phx2
- [ ] can we bump cloudfront to cache ostree stuff for a long time so when we move the ostree volume it still works?

### Week 14 (2020-06-08 -> 2020-06-14)

- [ ] Move logical infrastructure to IAD2 MVFE

#### 2020-06-08 - monday

Fedora messaging, PDC, mirrormanager and authentication day

- [ ] 15:00 UTC: *.stg.fedoraproject.org shut down and commented in ansible inventory.
- [ ] 15:00 UTC: *.stg.fedoraproject.org virthosts reconfigured for iad2 and powered off.
- [ ] fedora-messaging/fedmsg buses
  - [ ] shutdown phx2 cluster
  - [ ] change rabbitmq.fedoraproject.org in haproxy to point to rabbitmq01-iad/02-iad/03-iad
  - [ ] make sure openshift iad2 messaging-bridges are working
    - [ ] github2fedmsg
    - [ ] bugzilla2fedmsg
- [ ] migrate notifs-backend01/notifs-web01 vms
  - [ ] shut down vms
  - [ ] copy storage and libvirt xml over
  - [ ] db switcharoo: db01/notifs
  - [ ] bring up and reconfigure for new network
- [ ] migrate pdc instances. WARNING: commits will fail while pdc is down, do this fast
  - [ ] stop services in phx2
  - [ ] dump database / restore database
  - [ ] bring up services in iad2 and test
  - [ ] switch haproxy to point to pdc-web01-iad2
- [ ] mirrormanager
  - [ ] shut down vm's in phx2
  - [ ] db switcharoo: db01/mirrormanager2
  - [ ] start things in iad2
  - [ ] test
  - [ ] point service in haproxy to iad2
- [ ] authentication stack
  - [ ] stop services in phx2: fas, ipsilon, ipa
  - [x] bring up fas in iad2 openshift
  - [x] bring up ipsilon in iad2 openshift
  - [ ] bring up ipa vm's in iad2
  - [ ] db switcharoo: db-fas01/*
  - [ ] switch httpd proxy config to point to -iad2 versions
  - [ ] disable ipa replication to/from ipa01.phx2 and ipa02.phx2 (can be done at any time, just power off phx2 IPA boxes for now)
- [ ] power off misc hardware that is not being shipped.
  - [ ] kernel02
  - [ ] others?

#### 2020-06-09 - tuesday

Buildsystem and its friends day:

- [ ] 00:01 UTC: disable bodhi push after it runs
- [ ] 15:00 UTC: disable all builders and show koji offline message.
- [ ] 15:00 UTC: cancel all builds still in progress
- [ ] 15:00 UTC: disable all builders.
- [ ] 15:00 UTC: build system goes down. (src/koji/kojipkgs/bodhi/master mirrors/registry/builders)
- [ ] 15:00 UTC: begin netapp transition for:
  - [ ] fedora_koji
  - [ ] fedora_koji_archive*
  - [ ] fedora_odcs
  - [ ] fedora_ostree_content
  - [ ] fedora_ftp
  - [ ] fedora_sourcecache
  - [ ] oci-registry
- [ ] db-koji01 database
  - [ ] start pg_dumpall on db-koji01.phx2 time:
  - [ ] transfer to db-koji01.iad2 time:
  - [ ] load pg_dump into db-koji01.iad2 time:
- [ ] rsync of pkgs02.phx2 git data to pkgs01.iad2 time:
- [ ] bodhi database
  - [ ] dump bodhi2 db on db01.phx2 time:
  - [ ] transfer db dump to db01.iad2 time:
  - [ ] load dump into db01.iad2 time:
- [ ] resultsdb database
  - [ ] dump resultsdb01 db on db-qa01.qa time:
  - [ ] transfer db dump to db01.iad2 time:
  - [ ] load dump into db01.iad2 time:
- [ ] fedora_ftp cut over: mount ro on downloads and rw on composers/bodhi
- [ ] oci_registry cut over: mount as needed, switch dns to iad2.
- [ ] fedora_koji_archive* cut over: mount needed hosts in iad2.
- [ ] repoint s390x builders to new koji hub.
- [ ] db-koji01.iad2 is loaded and fedora_koji is ready, bring up koji for testing.
  - [ ] run 1.21.0 koji migrations on db
  - [ ] run sql modify script to clean things up
  - [ ] add all new iad2 builders hosts to the db
  - [ ] adjust hub config for channels and adjust builders for channels.
  - [ ] set hub to LockOut = true and test with some admin commands
  - [ ] unset lockout
- [ ] fedora_sourcecache is ready and pkgs sync is done, bring up pagure on pkgs for testing.
- [ ] bodhi db dump/restore done and fedora_ftp done and koji done, bring up bodhi for testing.
- [ ] bodhi openshift pods bring up
- [ ] coreos* openshift pods bring up
  - [ ] fedora-ostree-pruner
  - [ ] compose-tracker
- [ ] odcs db dump/restore done and fedora_ftp done and koji done, bring up odcs for testing.
- [ ] osbs db dump/restore done and fedora_ftp done and koji done, bring up osbs for testing.
- [ ] mbs db dump/restore done and fedora_ftp done and koji done, bring up mbs for testing.
- [ ] test koji build
- [ ] test signing
- [ ] test bodhi updates push
- [ ] switch proxy httpd balancer to point koji to iad2
- [ ] switch proxy httpd balancer to point dns for pkgs/src to iad2
- [ ] switch proxy httpd balancer to point bodhi to iad2
- [ ] fire off a rawhide compose on compose-rawhide01.iad2 (keep cron off)

#### 2020-06-10 - wed

Openshift apps, mailman/lists and datagrepper/datanommer day

- [ ] 16:00 UTC backups
  - [ ] switcharoo fedora_backups volume
- [ ] openshift apps
  - [ ] asknot
  - [ ] distgit-bugzilla-sync
  - [ ] greenwave
  - [ ] koschei (scheduling off)
  - [ ] mdapi
    - [ ] switcharoo fedora_prod_mdapi
  - [ ] message-tagging-service
  - [ ] monitor-gating
  - [ ] release-monitoring
  - [ ] the-new-hotness
  - [ ] waiverdb
- [ ] mailman/lists
  - [ ] take down mailman01.phx2
  - [ ] Migrate it and all data to iad2
  - [ ] move pointer to iad2.
- [ ] datagrepper / datanommer
  - [ ] take down service in phx2.
  - [ ] dump db and reload in iad2
  - [ ] switch dns to iad2 service.

#### 2020-06-11 - thursday

Website builders, blockerbugs, and elections day:

- [ ] docsbuilding
  - [ ] switch fedora_prod_docs
- [ ] websites building
  - [ ] switch fedora_prod_websites
- [ ] reviewstats
  - [ ] switch fedora_prod_reviewstats
- [ ] fedimg
- [ ] blockerbugs
  - [ ] dump/restore blockerbugs db
  - [ ] bring up blockerbugs for testing in iad2.
  - [ ] switch blockerbugs dns to iad2.
- [ ] kerneltest vm/app
  - [ ] dump/restore db
  - [ ] bring up in iad2 and test
  - [ ] switch in vpn/dns
- [ ] 23:59:59 UTC: elections
  - [ ] wait until elections are over
  - [ ] stop app in phx2, sync db and bring up in iad2.
- [ ] if all looks well, switch all vpn hosts to point to bastion01.iad2
- [ ] shutdown bastion hosts in phx2.

#### 2020-06-12 - friday

Stomp out fires day:

- [ ] fix bugs / issues as found
- [ ] process new packages
- [ ] allow retirements
- [ ] make sure everything in phx2 is off and ready to ship next week
- [ ] test and re-enable backups on backup01.iad2.

### Week 15 (2020-06-15 -> 2020-06-21)

- [ ] Shutdown of PHX2 racks
- [ ] Removal of systems from PHX2 and shipment to IAD2
- [ ] Travel of equipment to IAD2
- [ ] PHX2 -- go over remaining hardware to recycle

### Week 16 (2020-06-22 -> 2020-06-28)

- [ ] Most likely time for hardware arrival
- [ ] Racking and stacking of IAD2 equipment
- [ ] Set up mgmt interfaces
- [ ] Do initial hardware installs to RHEL8

### Week 17 (2020-06-29 -> 2020-07-05)

- [ ] Finish initial hardware installs
- [ ] Bring up additional builders

### Week 18 (2020-07-06 -> 2020-07-12)

- [ ] Bring up additional services
- [ ] Move mgmt interfaces back to static

### Week 19 (2020-07-13 -> 2020-07-19)

- [ ] Probably more work at data centre

### Week 20 (2020-07-20 -> 2020-07-26)

- [ ] Sign off on work completed

### Week 21 (2020-07-27 -> 2020-08-02)

- [ ] A miracle occurs *Several in fact*

### Week 22 (2020-08-03 -> 2020-08-09) ???

- [ ] Mass Rebuild for Fedora 33 starts
- [ ] All systems must be up and running
- [ ] Production needs to be normal
- [ ] FlockToFedora 2020 (now virtual)
- [ ] profit

HISTORICAL SECTION BELOW HERE

## Order of Bootstrapping in RDU-CC

1. [x] Get power setup for racks
2. [x] Get items shipped from PHX2
   1. [x] Racks
   2. [x] Systems
3. [x] Get systems installed
   1. [x] Racks
   2. [x] Systems
   3. [x] Inventory systems and confirm
   4. [x] Write up wire spreadsheet for networks
4. [x] SPIKE: Get switches wired into master router.
   1. [x] ~~ex3400 to mgmt vlan~~
   2. [x] ex4300 ports 1-24 to production vlan
   3. [x] ex4300 ports 25-36 to mgmt vlan
   4. [x] ex4300 ports 37-48 to storage vlan
   5. [x] ex5??? 10 gig ports to prod vlan
5. [x] Prepare bootstrap
   1. [x] make RHEL-8.2 usb stick
   2. [x] make CentOS-8.1 usb stick
   3. [x] get mask and gloves for datacenter visit
6. [ ] SPIKE: Build out bastion server (old vhost-s390)
   1. [x] wire idrac to mgmt vlan
   2. [x] configure idrac with ip address
   3. [x] wire eth0 to prod network
   4. [x] wire eth1 to mgmt vlan
   5. [x] install RHEL-8.1 onto hardware
   6. [ ] build an openvpn network
7. [ ] SPIKE: Mgmt interfaces
   1. [x] wire mgmt to top switch.
   2. [ ] power on hardware
   3. [ ] go into bios and configure the ip address
   4. [ ] test that mgmt is reachable from bastion-rdu-cc
8. [ ] SPIKE: Bring up vger
   1. [ ] wire hardware to front switch
   2. [ ] see if hardware works.
   3. [ ] give mgmt login to
   4. [ ] call in hardware repairs
9. [ ] SPIKE: Bring up retrace
   1. [ ] wire hardware into top and back switch
   2. [ ] see if hardware works
   3. [ ] log in via physical console and configure ip address
   4. [ ] test and fix ansible
10. [ ] Bring up storinator01
    1. [ ] log into console
    2. [ ] change ip addresses to proper ones
    3. [ ] test storage and data
11. [ ] Bring up vmhost-rdu-cc-05
    1. [x] Reinstall hardware with RHEL-8.1
    2. [ ] Start deploying guests as needed
12. [ ] Move over vmhost-rdu-cc-01 -> vmhost-rdu-cc-04 from other racks to A06
    1. [ ] Connect mgmt to mgmt network
    2. [ ] Give mgmt ip address via BIOS
    3. [ ] Connect eth0 to external network
    4. [ ] Connect eth1 to internal network
    5. [ ] Configure host to have br1 ip address
13. [ ] Build noc03 for rdu03
    1. [ ] Configure dhcp on eth1
    2. [ ] Configure tftp
    3. [ ] Mirror rhel8 and openshift bits
14. [ ] OpenShift 4.x Install
    1. [ ] Bring up proxy front ends
    2. [ ] Bring up etcd systems
    3. [ ] Begin install of dell fx systems
    4. [ ] Test initial loads
15. [ ] Additional Buildout changes?
    1. [ ] Add here as found.

## Order of Bootstrapping in IAD2

1. [x] IP space: we need to know what networks we have internally and externally.
   - [x] Internal: 10.3.160->10.3.176
   - [x] External: 38.145.60.0/24
2. [ ] Setup DNS space for the zones.
   - [x] Internal reverse zones.
   - [x] Internal forward zones.
   - [ ] External reverse zones.
   - [ ] External forward zones.
3. [x] Map internal network port allowances
   - [x] Prod to Build/Build to Prod
   - [x] Prod to QA/QA to Prod
4. [x] Map external network ports to internal
   - [x] External to Prod / Prod to External
   - [x] External to QA / QA to External
5. [ ] Background network setup
   - [x] Networking sets up vlans and wiring in top racks.
   - [ ] IAD2 firewall rules need to be setup for bastion host.
     - [x] ssh
     - [ ] https
     - [x] chrony
     - [ ] unbound/DNS
     - [ ] openvpn
   - [ ] IAD2 firewall rules need to be setup for general outbound access.
     - [ ] https
     - [ ] unbound/DNS
     - [ ] fedmsg
     - [ ] other outbound items?
   - [x] Set mgmt router to have dhcp using 10.3.160.* space
   - [ ] Power off all APC systems and then power on first dell system in rack to get it to talk dhcp
6. [ ] Get access to PDU's in racks {{Deferred}}
   - [ ] get ip addresses for racks 101, 103
   - [ ] get account and password
   - [ ] test login
   - [ ] power on systems
7. [x] Install initial hardware
   - [x] power on first virthost
   - [x] determine its dhcp address
   - [x] log into system idrac and change base password
   - [x] create admin account and give additional rights to it.
   - [x] give permanent ip address of 10.3.160.10
   - [x] test that system goes to new ip address and works.
   - [x] install rhel-8
   - [x] give host a secondary ip address for bastion01.iad2.
   - [x] test external login abilities to this host
   - [x] test routing via this host from phx2 facility.
   - [x] set up any other temporary services on this host
8. [x] Install additional hardware
   - [x] power on next host
   - [x] determine its dhcp address
   - [x] log into system idrac and change base password
   - [x] create admin account and give additional rights to it.
   - [x] give permanent ip address for host in dns
   - [x] test that system goes to new ip address and works.
   - [x] install rhel-8
9. [ ] Install guests
   - [x] DHCP/TFTP from all Build/QA/etc networks routes to 10.3.163.10

### IAD2 Build list:

1. [ ] Remaining virtual servers
   1. [x] centos01
   2. [x] centos02
   3. [x] vmhost-x86-01
   4. [x] vmhost-x86-02
   5. [x] vmhost-x86-03
   6. [x] vmhost-x86-04
   7. [x] vmhost-x86-05
   8. [x] vmhost-x86-06
   9. [x] vmhost-x86-07
   10. [x] bvmhost-x86-01
   11. [x] bvmhost-x86-02
   12. [x] bvmhost-x86-03
   13. [x] bvmhost-x86-04
   14. [x] bvmhost-x86-05
   15. [x] bvmhost-x86-06
   16. [ ] bvmhost-a64-01 (ampere01)
   17. [ ] bvmhost-a64-02 (ampere02)
   18. [x] bvmhost-x86-07
   19. [ ] autosign01 (fed-cloud12)
   20. [ ] sign-box (sign06)
   21. [ ] bkernel (bkernel05?)
   22. [ ] mustang01 (needs serial)
   23. [ ] mustang02 (needs serial)
   24. [ ] power08-01 (needs to exist)
   25. [ ] power09-01 (needs to exist)
   26. [ ] power08-02 (needs to exist)
   27. [ ] power09-02 (needs to exist)
   28. [ ] bvmhost-x86-08 (virthost05)
   29. [ ] qvmhost-x86-01 (qa r640)
   30. [x] qvmhost-x86-02 (qa r640)
   31. [ ] qa-x86-01 (qa r640)
   32. [ ] qa-a64-01 (ampere)
   33. [ ] bvmhost-a64-03 (ampere)
   34. [ ] bvmhost-a64-04 (ampere)
3. [ ] Critical infrastructure services (MVF list)
   - [x] bastion2
   - [x] config-mgmt (batcave)
   - [x] rebuild bastion1 as proper virt-guest
   - [x] dns
   - [x] noc/dhcp
   - [x] log-server
   - [x] tang
   - [ ] sign-vault
   - [ ] sign-bridge
   - [ ] autosign
   - [x] certgetter
   - [ ] ipa cluster (rhel8 from rhel7)
   - [x] loopabull
   - [x] mirrormanager vm's
   - [x] noc01
4. [x] SPIKE: database servers
   - [x] db01 postgresql 12 / rhel8
   - [x] db-koji01 postgresql 12 / rhel8
   - [x] db-fas01 postgresql 12 / rhel8
   - [x] db02 (mariadb for wiki)
5. [x] SPIKE: rabbitmq setup
6. [x] SPIKE: bring up openshift cluster
7. [ ] SPIKE: bring up koji and build infra as a temp staging environment to run through the list of MVF builds and make sure that stuff works.
   - [x] koji hubs
   - [ ] koji builders
   - [x] kojipkgs
   - [x] bodhi-backend
   - [ ] grobisplitter
   - [x] mbs
   - [x] rawhide/branched composers
   - [x] compose-x86-01
   - [x] compose-iot
   - [ ] osbs
   - [x] odcs
   - [x] registry
   - [x] pkgs
   - [x] downloads
   - [x] pdc
   1. [x] set partition on the netapp
   2. [x] mount them on the box
   3. [x] run the services here for testing
8. [ ] SPIKE: bring up additional non-build service
   1. [x] proxies
   2. [ ] mailing lists
   3. [x] backups
   4. [x] download servers
   5. [x] sundries
   6. [x] value
   7. [x] wiki
   8. [x] bugzilla2fedmsg
   9. [x] datagrepper
   10. [x] datanommer-db
   11. [ ] FMN (might just sync this over and adjust it)
9. [ ] SPIKE: openqa setup and testing
10. [ ] Evaluate the MVF with community member testing by sending to the lists with a feedback loop & closeout time

## vm / application install / initial configuration checklist

non build: week of 2020-05-18

* [x] bastion01.phx2.fedoraproject.org => bastion01.iad2
* [x] bastion02.phx2.fedoraproject.org => bastion02.iad2
* [x] batcave01.phx2.fedoraproject.org => batcave01.iad2
* [x] blockerbugs01.phx2.fedoraproject.org - mostly up, but some kind of db error needs fixing
* [x] bugzilla2fedmsg01.phx2.fedoraproject.org
* [x] busgateway01.phx2.fedoraproject.org
* [x] certgetter01.phx2.fedoraproject.org
* [x] datagrepper01.phx2.fedoraproject.org - not working, needs investigation
* [x] db01.phx2.fedoraproject.org => db01.iad2
* [x] db03.phx2.fedoraproject.org => db02.iad2
* [x] db-datanommer02.phx2.fedoraproject.org - loading db dump 2020-05-23 22UTC - took 6.5 hours
* [x] db-fas01.phx2.fedoraproject.org => db-fas01.iad2
* [x] download01.phx2.fedoraproject.org => dl01.iad2
* [x] download02.phx2.fedoraproject.org => dl02.iad2
* [x] download03.phx2.fedoraproject.org => dl03.iad2
* [x] download04.phx2.fedoraproject.org => dl04.iad2
* [x] download05.phx2.fedoraproject.org => dl05.iad2
* [x] fedimg01.phx2.fedoraproject.org
* [x] github2fedmsg01.phx2.fedoraproject.org
* [n] grobisplitter01.phx2.fedoraproject.org - move to batcave01.iad2
* [x] ipa01.phx2.fedoraproject.org
* [x] ipa02.phx2.fedoraproject.org
* [x] log01.phx2.fedoraproject.org
* [x] loopabull01.phx2.fedoraproject.org
* [ ] mailman01.phx2.fedoraproject.org MIGRATE?
* [x] memcached01.phx2.fedoraproject.org
* [x] mm-backend01.phx2.fedoraproject.org
* [x] mm-crawler01.phx2.fedoraproject.org
* [x] mm-frontend01.phx2.fedoraproject.org
* [x] mm-frontend-checkin01.phx2.fedoraproject.org
* [x] noc01.phx2.fedoraproject.org => noc01.iad2
* [ ] notifs-backend01.phx2.fedoraproject.org MIGRATE?
* [ ] notifs-web01.phx2.fedoraproject.org MIGRATE?
* [x] ns03.phx2.fedoraproject.org => ns01.iad2
* [x] ns04.phx2.fedoraproject.org => ns02.iad2
* [x] oci-candidate-registry01.phx2.fedoraproject.org - can't connect to vpn, playbook fails
* [x] oci-registry01.phx2.fedoraproject.org - can't connect to vpn, playbook fails
* [x] os-control01.phx2.fedoraproject.org => os-control01.iad2
* [x] os-master01.phx2.fedoraproject.org => os-master01.iad2
* [x] os-master02.phx2.fedoraproject.org => os-master02.iad2
* [x] os-master03.phx2.fedoraproject.org => os-master03.iad2
* [x] os-node01.phx2.fedoraproject.org => os-node01.iad2
* [x] os-node02.phx2.fedoraproject.org => os-node02.iad2
* [x] os-node03.phx2.fedoraproject.org => os-node03.iad2
* [x] os-node04.phx2.fedoraproject.org => os-node04.iad2
* [x] os-node05.phx2.fedoraproject.org => os-node05.iad2
* [x] pdc-backend01.phx2.fedoraproject.org
* [x] pdc-backend02.phx2.fedoraproject.org
* [x] pdc-backend03.phx2.fedoraproject.org
* [x] pdc-web01.phx2.fedoraproject.org
* [x] pdc-web02.phx2.fedoraproject.org
* [x] proxy01.phx2.fedoraproject.org => proxy01.iad2
* [x] proxy101.phx2.fedoraproject.org
* [x] proxy10.phx2.fedoraproject.org
* [x] proxy110.phx2.fedoraproject.org
* [x] rabbitmq01.phx2.fedoraproject.org => rabbitmq01.iad2
* [x] rabbitmq02.phx2.fedoraproject.org => rabbitmq02.iad2
* [x] rabbitmq03.phx2.fedoraproject.org => rabbitmq03.iad2
* [x] secondary01.phx2.fedoraproject.org
* [x] sundries01.phx2.fedoraproject.org => sundries01.iad2
* [x] tang01.phx2.fedoraproject.org => tang01.iad2
* [x] tang02.phx2.fedoraproject.org => tang02.iad2
* [x] value01.phx2.fedoraproject.org
* [x] wiki01.phx2.fedoraproject.org
* [!] zanata2fedmsg01.phx2.fedoraproject.org - do we still need this?
build: week of 2020-05-25

* [x] backup01.iad2.fedoraproject.org
* [x] compose-iot-01.phx2.fedoraproject.org
* [x] compose-x86-01.phx2.fedoraproject.org
* [x] bodhi-backend01.phx2.fedoraproject.org
* [x] db-koji01.phx2.fedoraproject.org
* [x] koji01.phx2.fedoraproject.org
* [x] koji02.phx2.fedoraproject.org
* [x] kojipkgs01.phx2.fedoraproject.org
* [x] kojipkgs02.phx2.fedoraproject.org
* [x] mbs-backend01.phx2.fedoraproject.org
* [x] mbs-frontend01.phx2.fedoraproject.org
* [x] odcs-backend01.phx2.fedoraproject.org - needs info from app owner
* [x] odcs-frontend01.phx2.fedoraproject.org - needs info from app owner
* [x] osbs-control01.phx2.fedoraproject.org
* [x] osbs-master01.phx2.fedoraproject.org
* [x] osbs-node01.phx2.fedoraproject.org
* [x] osbs-node02.phx2.fedoraproject.org
* [x] pkgs02.phx2.fedoraproject.org
* [x] rawhide-composer.phx2.fedoraproject.org
* [x] sign-bridge01.phx2.fedoraproject.org
* [ ] buildvm-NN.phx2.fedoraproject.org (as many as fit) (bvmhost-x86-06/07, 16 each = 32 total)
* [ ] buildvm-aarch64 (as many as fit)
* [ ] buildvm-ppc64le
* [ ] buildvm-armv7
* [ ] buildvm-s390x (just need access confirmed)

qa vms:

- [ ] bastion-comm01.qa.fedoraproject.org

These may be able to consolidate to 1:

- [ ] db-qa01.qa.fedoraproject.org
- [ ] db-qa02.qa.fedoraproject.org
- [ ] db-qa03.qa.fedoraproject.org
- [ ] openqa01.qa.fedoraproject.org - adamw taking?
- [ ] openqa-stg01.qa.fedoraproject.org - adamw taking?
- [ ] resultsdb01.qa.fedoraproject.org - need to figure out where this goes

## Old data

### April 13 Cloud to RDU-CC

* Hardware was deracked and removed from PHX2 data centre 2020-04-14
* Hardware arrived at data centre 2020-04-20
* Hardware was racked/stacked 2020-??-??
* Hardware was reinstalled 2020-??-??

| Serial No | Model | Mac Address | Current Hostname | New Hostname |
| --------- | ----- | ----------- | ---------------- | ------------ |
| ??? | juniper | ??? | | |
| ??? | juniper | ??? | | |
| ??? | OpenGear | ??? | rack 156 open | opengear02.rdu-cc. |
| ??? | Mustang | ??? | ??? | ??? |
| ??? | Mustang | ??? | ??? | ??? |
| ??? | Power8 | ??? | ??? | ??? |
| ??? | Dell R6550 | ??? | copr-vmhost01 | copr-vmhost01 |
| ??? | Dell R6550 | ??? | copr-vmhost02 | copr-vmhost02 |
| ??? | Dell R6550 | ??? | copr-vmhost03 | copr-vmhost03 |
| ??? | Dell R6550 | ??? | copr-vmhost04 | copr-vmhost04 |
| ??? | Len Ampr | ??? | cloud-a64 | cloud-a64 |
| ??? | Dell R7550 | ??? | retrace03 | ???? |
| ??? | Dell R630 | ??? | ??? | cloudvmhost-x86_64-01 |
| ??? | Dell R630 | ??? | ??? | cloudvmhost-x86_64-02 |
| ??? | Dell FX | ??? | lots-o-stuff | lots-o-stuff |
| ??? | juniper | ??? | | |
| ??? | juniper | ??? | | |
| ??? | 10g switch | ??? | | |
| ??? | Power8 | ??? | ??? | ??? |
| ??? | Cavium 1 | ??? | ??? | ??? |
| ??? | Cavium 2 | ??? | ??? | ??? |
| ??? | Dell FX | ??? | lots-o-stuff | lots-o-stuff |
| ??? | Storinator | ??? | storinator01 | storinator01 |

### April 13 MVF to IAD

* Hardware was deracked and removed from PHX2 data centre 2020-04-14
* Hardware arrived at data centre 2020-04-20
* Hardware was racked/stacked 2020-??-??
* Hardware was reinstalled 2020-??-??

| Serial No | Model | Mac Address | Current Hostname | New Hostname |
| --------- | ----- | ----------- | ---------------- | ------------ |
| ??? | OpenGear | ??? | rack 146 open | opengear01.rdu-cc. |
| ??? | Dell R630 | B8:2A:72:FC:ED:22 | fed-cloud12 | vmh-x64-09.iad |
| ??? | Dell R630 | B8:2A:72:FC:F2:2E | fed-cloud13 | vmh-x64-10.iad |
| ??? | Len Ampr | E8:6A:64:39:18:99 | vhm-a-22 | vmh-a64-01.iad |
| ??? | Len Ampr | E8:6A:64:39:18:85 | vhm-a-21 | vmh-a64-02.iad |
| ??? | Dell R430 | ??? | sign02 | sign02 |
| ??? | Dell R430 | ??? | bkernel04 | bkernel04 |
| ??? | Mustang | ??? | ??? | ??? |
| ??? | Mustang | ??? | ??? | ??? |
| ??? | Power9 | ??? | ??? | ??? |
| ??? | Power8 | ??? | ??? | ??? |
| ??? | Len Ampr | E8:6A:64:97:6B:49 | oqa-a-02 | oqa-a-01.iad |
| ??? | Dell R640 | ??? | ??? | vhm-qa-01.qa |
| ??? | Dell R640 | ??? | ??? | vhm-qa-02.qa |
| ??? | Dell R640 | ??? | ??? | oqa-x86-01.qa |
| ??? | Dell R630 | ??? | fed-cloud-15 | oqa-x86-02.qa |
| ??? | Power 9 | ??? | ??? | oqa-p64 |
| ??? | Dell R630 | ??? | virthost05 | bvhm-02 |