# Current Live updated schedule for PHX2 -> IAD2 move

###### tags: `IAD2 Datacenter`

## Acronyms Used in Document

- MVFE - Minimal Viable Fedora Environment
- PHX2 - Chandler, Arizona datacentre
- RDU-CC - Morrisville, NC datacentre community cage
- IAD2 - Ashburn, Equinix datacentre
- AAA - Authentication, Authorization, Auditing
- IPA - Identity, Policy, Audit
- FAS - Fedora Account System (older AAA solution)
- FreeIPA - Newer account system

### Shipment Equipment

The following lists cover the hardware which is being shipped on specific dates and where the hardware is going.

#### June 15th Fedora to IAD

**note:** it looks like it will take 2 to 3 days to unrack the systems in PHX2. It will also take 1 week for the hardware to travel across the country.

| Serial No | Model | Current Hostname | Current Rack | New Rack | New Hostname |
| --------- | ----- | ---------------- | ------------ | -------- | ------------ |
| ??? | Ampere | ??? | Rack 150 | 101 | ??? |
| ??? | Ampere | ??? | Rack 147 | 101 | ??? |
| ??? | Dell R630 | fed-cloud13 | Rack 157 | 101 | ??? |
| ??? | Dell R430 | sign02 | Rack 148 | 101 | ??? |
| ??? | Dell R430 | bkernel04 | Rack 150 | 101 | ??? |
| ??? | Mustang | 3PBC-A0000Z | Rack 146 | 101 | ??? |
| ??? | Mustang | 3PBD-A0004P | Rack 146 | 101 | ??? |
| ??? | IBM Power 9 | ??? | Rack 147 | 101 | ??? |
| ??? | IBM Power 8 | ??? | Rack 147 | 101 | ??? |
| ??? | Dell R630 | virthost02.stg | Rack 153 | 102 | ??? |
| ??? | Dell R630 | virthost01.stg | Rack 153 | 102 | ??? |
| ??? | Dell R640 | virthost05 | Rack 154 | 102 | ??? |
| ??? | Dell R640 | virthost04 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | virthost06 | Rack 153 | 102 | ??? |
| ??? | Dell R630 | virthost02 | Rack 153 | 102 | ??? |
| ??? | Dell R430 | sign01 | Rack 148 | 102 | ??? |
| ??? | Dell R630 | backup01 | Rack 155 | 102 | ??? |
| ??? | Dell R730 | virthost01 | Rack 153 | 102 | ??? |
| ??? | Dell R630 | virthost12 | Rack 155 | 102 | ??? |
| ??? | Dell R630 | virthost22 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | fed-cloud15 | Rack 157 | 102 | ??? |
| ??? | Dell R630 | bvirthost01 | Rack 148 | 102 | ??? |
| ??? | Dell R630 | bvirthost04 | Rack 149 | 102 | ??? |
| ??? | Dell R630 | bvirthost05 | Rack 149 | 102 | ??? |
| ??? | Dell R630 | bvirthost13 | Rack 149 | 102 | ??? |
| ??? | Dell R630 | bvirthost14 | Rack 149 | 102 | ??? |
| ??? | Dell R630 | bvirthost15 | Rack 149 | 102 | ??? |
| ??? | Dell R430 | bkernel03 | Rack 150 | 102 | ??? |
| ??? | Dell R630 | virthost14 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | virthost19 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | virthost21 | Rack 154 | 102 | ??? |
| ??? | Dell R630 | autosign | Rack 149 | 102 | ??? |
| ??? | Dell FX2 | Builders FX | Rack 150 | 102 | ??? |
| ??? | IBM Power 9 | ??? | Rack 152 | 102 | ??? |
| ??? | IBM Power 8 | ??? | Rack 147 | 102 | ??? |
| ??? | IBM Power 9 | ??? | Rack 147 | 103 | ??? |
| ??? | Ampere | ??? | Rack 147 | 103 | ??? |
| ??? | Ampere | ??? | Rack 157 | 103 | ??? |
| ??? | Ampere | ??? | Rack 149 | 103 | ??? |
| ??? | Cavium | ??? | Rack 146 | 103 | ??? |
| ??? | Ampere | ??? | Rack 147 | 103 | ??? |
| ??? | Ampere | ??? | Rack 147 | 103 | ??? |
| ?RT | Ampere | ?RT | Rack 147 | 103 | ??? |
| ?LT | Ampere | ?LT | Rack 149 | 103 | ??? |
| ?RX | Ampere | ?RX | Rack 149 | 103 | ??? |
| ?NG | Ampere | ?NG | Rack 150 | 105 | ??? |
| ?LV | Ampere | ?LV | Rack 150 | 105 | ??? |
| ?NT | | ?NT | Rack 150 | 105 | ??? |
| ?RH | Ampere | ?RH | Rack 150 | 104 | ??? |
| ??? | Dell R630 | virthost04.stg | Rack 154 | 104 | ??? |
| ??? | Dell R630 | virthost03.stg | Rack 153 | 104 | ??? |
| ?V2 | Ampere | ?V2 | Rack 149 | 104 | ??? |
| ??? | Dell R630 | vhost-comm01 | Rack 152 | 104 | ??? |
| ??? | Dell R640 | qa01 | Rack 151 | 104 | ??? |
| ?MN | Ampere | ?MN | Rack 147 | 104 | ??? |
| ?P4 | Ampere | ?P4 | Rack 147 | 104 | ??? |
| ?R6 | Ampere | ?R6 | Rack 147 | 104 | ??? |
| ??? | Ampere | ??? | Rack 150 | 104 | ??? |
| ??? | Dell FX2 | Builders FX | Rack 148 | 104 | ??? |
| ??? | IBM Power 9 | ??? | Rack 152 | 104 | ??? |
| ??? | IBM Power 8 | ??? | Rack 147 | 104 | ??? |
| ??? | OpenGear | Serial | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A00053 | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0009P | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0009B | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0002B | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0008J | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0006W | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0004G | Rack 146 | 105 | ??? |
| ??? | Mustang | 3PBD-A0007T | Rack 146 | 105 | ??? |
| ??? | IBM Power 8 | ??? | Rack 147 | 105 | ??? |
| ??? | IBM Power 8 | ??? | Rack 152 | 105 | ??? |

## Phase 1

### Week 00 (2020-03-02 -> 2020-03-08)

- [x] Hardware shipping needs to be planned out
- [x] Rack layouts for IAD2 and RDU-CC need to be finalized
- [x] Work with RH IT on what they need in network diagrams

### Week 01 (2020-03-09 -> 2020-03-15)

- [x] Get RHEL-8 virthost built
- [x] Deliver network diagrams to RHIT
- [x] Announce downtime for communishift
- [x] ~~Try to find a place for communishift to temp run? - no~~

### Week 02 (2020-03-16 -> 2020-03-22)

- [x] 2020-03-17 Fedora 32 Beta Date

### Week 03 (2020-03-23 -> 2020-03-29)

- [x] Re-Announce downtime for communishift
- [x] Get updated tasks and timeline set out
- [x] Work out COVID-19 Contingencies

### Week 04 (2020-03-30 -> 2020-04-05)

- [x] Virthost03 skunkworks (kevin)
  - [x] RHEL-8 install
  - [x] test encryption over bridge
  - [x] test TEAM network
- [x] Virthost05 skunkworks (kevin)
  - [x] uefi EL8 install instructions
  - [x] secure boot.
- [x] Re-Announce downtime for communishift
- [x] Set up noggin instance in AWS to replace communishift (kevin)

### Week 05 (2020-04-06 -> 2020-04-12)

- [x] Create DNS templates for IAD2
- [x] Create template DHCP for mgmt hosts
- [x] Add more items to ship to IAD2
- [x] Get network layout for RDU-CC finalized
  - [x] public ip address count and space needed
  - [x] mgmt network 172.23.1.??/24
  - [x] private network for openshift backnodes 172.23.2.??/24
- [x] Collect all mgmt mac addresses
- [x] Collect all hardware mac addresses for debugging
- [x] Set mgmt interfaces to DHCP before getting shipped
  - [x] opengear
  - [x] amperes
  - [ ] power hardware
- [x] 2020-04-09 Fedora 32 Final Freeze
- [x] Power needs to be ready in RDU-CC cage
- [x] Option A/B meeting for Phase 2
  - [x] Option A - We ship items and continue with A
  - [ ] Option B - We go with planning out move of items within PHX2 and look at the larger move to IAD2 next year

## Phase 2

### Week 06 (2020-04-13 -> 2020-04-19)

- [x] Put move details into status.fedoraproject.org so people can see what is going on.
- [x] Take down and ship communishift hardware to RDU-CC
- [x] Take down and ship extra hardware to IAD2
- [x] Begin takedown of communishift hardware
  - [x] Communishift down 13th April - 1st May
- [x] Power off systems in racks
- [x] Work with logistics for pack and move
- [x] Power should be on by 17th - this is power to RDU-CC

### Week 07 (2020-04-20 -> 2020-04-26)

- [x] Hardware should arrive at RDU-CC
  - [x] Rerack from 27th April - 1st May
- [x] Hardware should arrive at IAD2
  - [x] work with Shaun's team to get systems racked/stacked in 101

### Week 08 (2020-04-27 -> 2020-05-03)

- [x] Work out temporary root password for installs
- [x] 2020-04-28 Fedora 32 release
- ~~[ ] set ip address in mac~~
- ~~[ ] set up admin user~~
- ~~[ ] set ipmi and serial over lan access~~

### Week 09 (2020-05-04 -> 2020-05-10)

- [x] Write howto on Dell mgmt setup (smooge)
- [x] IAD2 work (see IAD2 bootstrap)
  - [x] find all expected hardware
  - [x] Install new hardware

### Week 10 (2020-05-11 -> 2020-05-17)

- [x] IAD2 work (see IAD2 bootstrap)

### Week 11 (2020-05-18 -> 2020-05-24)

- [x] IAD2 work (see IAD2 bootstrap)
- [ ] RDU Bootstrap
  - [ ] Do any items in RDU-CC that time allows
  - [ ] Work with IT on any network layout issues left for RDU2 site.
  - [ ] communishift proxies with private + external interfaces
- [x] Give internal IT new DNS servers ip address
- [x] Give internal IT new SMTP server ip address
- [ ] Move final virthost-cc boxes into new RDU-CC racks

### Week 12 (2020-05-25 -> 2020-05-31)

- [x] IAD2 work (see IAD2 bootstrap)
- [x] Rack 103 must be up. Get openqa and other systems in.
## Phase 4

### Week 13 (2020-06-01 -> 2020-06-07)

- [ ] Final checklist and approval of IAD2 MVFE
  - [ ] Test email routing through IAD2 proxies
  - [x] Test www proxies
  - [ ] Test builds
  - [x] Test openvpn
  - [x] Test rsync
  - [x] Test route to s390x
  - [ ] Test route to bugzilla STOMP message bus
  - [x] make sure nagios says everything is green
- [x] Bring up openqa in IAD2
- [ ] Change Fedora DNS to shorter times for major change
- [ ] GO/NO-GO meeting:
  - [x] Option A: Ship everything left in PHX2 to IAD2 on June 15th
  - [ ] Option B: Look at internal move of PHX2 equipment

#### Fedora Datacenter Move week (and week before) - detailed tasks

#### Notes:

* db switcharoo means: take service down in phx2, dump db in phx2, copy dump to iad2, load dump in iad2, bring up service in iad2, test, switch dns.
* snapmirror switcharoo means: take service down in phx2, umount all users of volume, sync final data via snapmirror, break snapmirror and make rw in iad2.

Outstanding questions:

- Did we miss any applications/services?
- When do we want to move openqa? It's pretty self contained.

#### 2020-06-01 - monday

Testing day [Fedora Datacenter Test Plan](/op6N_nIaR7aMzw9Ib-sDAQ)

#### 2020-06-02 - tuesday

Penultimate testing day [Fedora Datacenter Test Plan](/op6N_nIaR7aMzw9Ib-sDAQ)

#### 2020-06-03 - wed

Last testing day

- [x] 16:00 UTC: move dl.fedoraproject.org over to iad2
  - [x] change dns
  - [x] disable rsync/httpd on phx2 download servers
  - [x] destroy all phx2 download vm's.
  - [x] send note to mirror-admins list in case someone finds problems
- [x] Test everything we can with a local /etc/hosts file and vpn pointing to iad bastions.
- [x] 21:00 UTC: Add all iad2 vpn hosts to vpn.fedoraproject.org dns with a -iad2 to the name and in the 192.168.20.x net.
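The "db switcharoo" pattern from the notes above can be sketched as a small dry-run helper. This is only an illustration of the steps (stop, dump, copy, load, restart); the service unit names, dump format, and hostnames below are placeholders, not the real inventory or commands used.

```shell
#!/bin/bash
# Dry-run sketch of the "db switcharoo" pattern: prints the commands
# it would run rather than executing them. All hostnames, unit names,
# and paths are placeholders for illustration.
set -euo pipefail

db_switcharoo() {
  local db="$1" old="$2" new="$3"
  # 1. take the service that owns the db down in phx2 (placeholder unit)
  echo "ssh ${old} systemctl stop ${db}.service"
  # 2. dump the db in phx2
  echo "ssh ${old} pg_dump -Fc ${db} -f /tmp/${db}.dump"
  # 3. copy the dump to iad2
  echo "scp ${old}:/tmp/${db}.dump ${new}:/tmp/${db}.dump"
  # 4. load the dump in iad2
  echo "ssh ${new} pg_restore -d ${db} /tmp/${db}.dump"
  # 5. bring the service up in iad2, then test and switch dns by hand
  echo "ssh ${new} systemctl start ${db}.service"
}

db_switcharoo exampledb db01.phx2.example db01.iad2.example
```

The snapmirror switcharoo follows the same take-down/cut-over shape, but operates on netapp volumes (break the mirror, remount read-write) instead of database dumps.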
#### 2020-06-04 - thursday

Batcave and wiki day (testing after this point will be difficult as vm's are connected to the phx2 vpn)

- [x] 15:00 UTC (start): switch to using batcave01.iad2
  - [x] shutdown access to batcave01.phx2
  - [x] final sync of all data
    - [x] /home
    - [x] /git
    - [x] /srv/* --exclude pub
    - [x] /root
  - [x] snapmirror switcharoo: fedora_app WARNING: builds need this for rhel things, no epel builds while it's changing, wiki needs this for attachments
    - [x] remount old volume in phx2 on kojipkgs*.phx2 to keep epel builds working
  - [x] test ansible-playbook runs from batcave01.iad2
  - [x] change dns for infrastructure.fedoraproject.org to batcave01.iad2
  - [x] power off batcave01.phx2 vm
- [x] have all iad2 vpn hosts reconnect to bastion01.phx2 so they can take traffic as we switch applications over to them.
- [/] change the "IAD2" view of the fedoraproject.org zone to use PHX2/external things for rabbitmq/etc. ie, where we have `if IAD2`, change to match `if PHX2`. This allows iad2 services to connect to phx2 ones until we move those.
- [x] set all dns zones to 5m ttl
- [ ] ~~wiki initial attempt - rolled back due to auth issue~~

#### 2020-06-05 - friday

Zodbot / meeting day

- [x] wiki (needs fedora_app for attachments)
  - [x] take wiki*phx2 vms down
  - [x] db switcharoo: db03/fpo-mediawiki (note this is mariadb)
  - [x] upgrade run (wiki01.iad2 is newer than wiki01.phx2)
  - [x] test via local curl and the like
  - [x] change haproxy to use wiki01-iad2 (so in proxies -> bastion01.phx2 -> wiki01.iad2)
- [ ] grobbisplitter - move to batcave01.iad2

#### 2020-06-06 - saturday

- [x] migrate value01 / zodbot
  - [x] pick time with no meetings planned
  - [x] stop zodbot/httpd on value01.phx2
  - [x] rsync data from value01.phx2 to value01.iad2
  - [x] bring up zodbot / fedmsg-irc from value01.iad2
  - [x] change value01 in proxies to value01-iad2

#### 2020-06-07 - sunday

- [x] disable rawhide composes after sunday's compose completes
- [x] confirm no new packages will be added, no unretirements processed. (mail sent to all processors)
- [x] disable backups on backup01.phx2

### Week 14 (2020-06-08 -> 2020-06-14)

- [ ] Move logical infrastructure to IAD2 MVFE

#### 2020-06-08 - monday

Fedora messaging, PDC, mirrormanager and authentication day

- [x] 15:00 UTC: *.stg.fedoraproject.org shut down and commented in ansible inventory.
- [x] 15:00 UTC: *.stg.fedoraproject.org virthosts reconfigured for iad2 and powered off.
- [x] fedora-messaging/fedmsg buses
  - [x] make sure iad2 cluster has all vhosts and queues needed
  - [x] shutdown phx2 cluster
  - [x] change rabbitmq.fedoraproject.org in dns to point to iad2 cluster
  - [x] make sure openshift iad2 messaging-bridges are working
    - [x] github2fedmsg
    - [x] bugzilla2fedmsg
  - [x] take down vm's, remove from ansible and clean from nagios
- [x] migrate notifs-backend01/notifs-web01 vms
  - [x] shut down vms
  - [x] copy storage and libvirt xml over
  - [x] db switcharoo: db01/notifs
  - [x] bring up and reconfigure for new network
- [x] migrate pdc instances !WARNING: commits will fail while pdc is down, do this fast
  - [x] stop services in phx2
  - [x] dump database / restore database
  - [x] bring up services in iad2 and test
  - [x] switch haproxy to point to pdc-web01-iad2
- [x] mirrormanager
  - [x] shut down service in phx2
  - [x] db switcharoo: db01/mirrormanager2
  - [x] start things in iad2
  - [x] test
  - [x] point service in haproxy to iad2
  - [x] stop vm's and remove from inventory, update nagios
- [x] authentication stack
  - [x] stop services in phx2: fas, ipsilon, ipa
  - [x] bring up fas in iad2 openshift
  - [x] bring up ipsilon in iad2 openshift
  - [x] bring up ipa vm's in iad2
  - [x] db switcharoo db-fas01/*
  - [x] switch httpd proxy config to point to -iad2 versions
  - [ ] disable ipa replication to/from ipa01.phx2 and ipa02.phx2 (can be done at any time, just power off phx2 IPA boxes for now)

Due to some entanglements, we needed to move our entire openshift cluster today. Some apps are still operating ok in phx2, but aren't reachable via the web anymore.

- [x] bodhi database
  - [x] dump bodhi2 db on db01.phx2 time:
  - [x] transfer db dump to db01.iad2 time:
  - [x] load dump into db01.iad2 time:
  - [x] bodhi db dump/restore done, bring up bodhi for testing.
- [x] bodhi openshift pods bring up
- [x] coreos* openshift pods bring up
  - [x] fedora-ostree-pruner
  - [x] compose-tracker
- [x] openshift apps
  - [x] asknot
  - [x] distgit-bugzilla-sync
  - [x] greenwave
  - [x] koschei (scheduling off)
  - [x] mdapi
    - [x] switcharoo fedora_prod_mdapi
  - [x] message-tagging-service
  - [x] monitor-gating
  - [x] release-monitoring
  - [x] the-new-hotness
  - [x] waiverdb
- [x] power off misc hardware that is not being shipped.
  - [x] kernel02
  - [x] others?

#### 2020-06-09 - tuesday

Buildsystem and its friends day:

- [x] 00:01 UTC: disable bodhi push after it runs
- [x] 15:00 UTC: disable all builders and show koji offline message.
- [x] 15:00 UTC: cancel all builds still in progress
- [x] 15:00 UTC: disable all builders.
- [x] 15:00 UTC: build system goes down. (src/koji/kojipkgs/bodhi/master mirrors/registry/builders)
- [x] 15:00 UTC: begin netapp transition for:
  - [x] fedora_koji
  - [x] fedora_koji_archive*
  - [x] fedora_odcs
  - [x] fedora_ostree_content
  - [x] fedora_ftp
  - [x] fedora_sourcecache
  - [x] oci-registry
- [x] koji.fedoraproject.org
  - [x] db-koji01 database
    - [x] start pg_dumpall on db-koji01.phx2 time: 2020-06-09 15:30
    - [x] transfer to db-koji01.iad2 time: 2020-06-09 22:45
    - [x] load pg_dump into db-koji01.iad2 time:
  - [x] db-koji01.iad2 is loaded and fedora_koji is ready, bring up koji for testing.
    - [x] run 1.21.0 koji migrations on db
    - [x] run sql modify script to clean things up
    - [x] add all new iad2 builders hosts to the db
    - [x] adjust hub config for channels and adjust builders for channels.
    - [x] set hub to LockOut = true and test with some admin commands
    - [x] unset lockout
    - [x] test via /etc/hosts and iad2 proxies.
- [x] src.fedoraproject.org
  - [x] rsync of pkgs02.phx2 git data to pkgs01.iad2 time:
  - [x] src.fedoraproject.org db
    - [x] db dump started
    - [x] transfer
    - [x] load into db01
  - [x] fedora_sourcecache is ready and pkgs sync is done, bring up pagure on pkgs for testing.
  - [x] test via /etc/hosts
  - [x] pkgs.fedoraproject.org dns (for ssh pushing)
  - [x] src.fp.o dns (for https/main service)
- [x] signing
  - [x] autosign
  - [x] sigul rpm ready from patrick
  - [x] sign-bridge01.iad2 playbook run and config, bridge running
  - [x] sign-vault01.iad2
    - [x] config adjusted for new sigul
    - [x] playbook run
    - [x] Old vault data loaded and unlocked.
- [x] odcs
  - [x] odcs db dump/restore done
  - [x] fedora_ftp done, koji done
  - [x] bring up odcs for testing.
- [ ] osbs
- [x] resultsdb
  - [x] resultsdb database
    - [x] dump resultsdb01 db on db-qa01.qa time:
    - [x] transfer db dump to db01.iad2 time:
    - [x] load dump into db01.iad2 time:
  - [ ] reconfigure to use db01 and bring up service.
- [ ] registry
  - [ ] oci_registry cut over: mount as needed, switch dns to iad2.
- [x] mbs
  - [x] mbs db dump/restore done
  - [ ] bring up mbs for testing.
- [ ] repoint s390x builders to new koji hub.
  - [x] upgrade s390x builders to f32
- [x] fedora_koji_archive* cut over: mount needed hosts in iad2.
- [x] fedora_ftp cut over: mount ro on downloads and rw on composers/bodhi

validation:

- [x] test koji scratch build
- [x] test koji real build
- [x] test koji container build
- [x] test koji module build
- [x] test signing
- [x] test bodhi updates push
- [x] switch proxy httpd balancer to point koji to iad2
- [x] switch dns for pkgs/src to iad2
- [x] switch proxy httpd balancer to point bodhi to iad2
- [x] fire off a rawhide compose on compose-rawhide01.iad2 (keep cron off)

#### 2020-06-10 - wed

Openshift apps, mailman/lists and datagrepper/datanommer day

- [x] 16:00 UTC backups
  - [x] switcharoo fedora_backups volume
- [x] switcharoo openshift_prod_mdapi
- [ ] mailman/lists
  - [x] take down mailman01.phx2
  - [x] Migrate it and all data to iad2
  - [x] move pointer to iad2.
- [ ] datagrepper / datanommer
  - [x] take down service in phx2.
  - [x] switch busgateway over to iad2
  - [x] dump db and reload in iad2
  - [x] switch dns to iad2 service.

#### 2020-06-11 - thursday

Website builders, blockerbugs, and elections day:

- [x] openqa
  - [x] switcharoo volume
  - [x] db switcharoo
  - [/] rsync assets from old server
  - [x] switch in dns
- [x] docsbuilding
  - [x] switch fedora_prod_docs
- [x] websites building
  - [x] switch fedora_prod_websites
- [x] reviewstats
  - [x] switch fedora_prod_reviewstats
- [x] fedimg
- [x] blockerbugs
  - [x] dump/restore blockerbugs db
  - [x] bring up blockerbugs for testing in iad2.
  - [x] switch blockerbugs dns to iad2.
- [x] kerneltest vm/app
  - [x] dump/restore db
  - [x] bring up in iad2 and test
  - [x] switch in vpn/dns
- [x] if all looks well, switch all vpn hosts to point to bastion01.iad2
- [/] shutdown bastion hosts in phx2.

#### 2020-06-12 - friday

Stomp out fires day:

- [x] fix bugs / issues as found
- [x] process new packages
- [x] allow retirements
- [x] make sure everything in phx2 is off and ready to ship next week
- [ ] test and re-enable backups on backup01.iad2.
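Several of the validation steps above ("test via /etc/hosts and iad2 proxies", "test via local curl") amount to pinning a production hostname at a new proxy before DNS is switched. One way to do that without editing /etc/hosts is curl's `--resolve` option; the sketch below shows the idea, with a placeholder hostname and IP (192.0.2.x is a documentation range, not a real proxy address).

```shell
#!/bin/bash
# Sketch: check a service via a specific proxy IP before switching DNS,
# using curl --resolve instead of an /etc/hosts edit. Host and IP are
# placeholders for illustration.
set -euo pipefail

check_via() {
  local host="$1" proxy_ip="$2"
  # --resolve pins host:443 to proxy_ip for this one request only
  local args=(-sf --resolve "${host}:443:${proxy_ip}" -o /dev/null "https://${host}/")
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # dry run: show the command instead of making a network request
    echo "curl ${args[*]}"
  else
    curl "${args[@]}" && echo "${host} OK via ${proxy_ip}"
  fi
}

# Example (placeholder IP):
#   check_via wiki.fedoraproject.org 192.0.2.10
```

The same pinned request works against either the old or the new proxy address, which makes it easy to compare responses from both sides during the cut-over.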
### Week 15 (2020-06-15 -> 2020-06-21)

- [x] Shutdown of PHX2 racks
- [x] Removal of systems from PHX2 and shipment to IAD2
- [x] Travel of equipment to IAD2
- [ ] PHX2 -- go over remaining hardware to recycle

### Week 16 (2020-06-22 -> 2020-06-28)

- [x] Most likely time for hardware arrival
- [x] Racking and stacking of IAD2 equipment
- [x] Set up mgmt interfaces
- [ ] Do initial hardware installs to RHEL8

### Week 17 (2020-06-29 -> 2020-07-05)

- [ ] Finish initial hardware installs
- [ ] Bring up additional builders

### Week 18 (2020-07-06 -> 2020-07-12)

- [ ] Bring up additional services
- [ ] Move mgmt interfaces back to static

### Week 19 (2020-07-13 -> 2020-07-19)

- [ ] Probably more work at data centre

### Week 20 (2020-07-20 -> 2020-07-26)

- [ ] Sign off on work completed

### Week 21 (2020-07-27 -> 2020-08-02)

- [ ] A miracle occurs *Several in fact*

### Week 22 (2020-08-03 -> 2020-08-09) ???

- [ ] Mass Rebuild for Fedora 33 starts
- [ ] All systems must be up and running
- [ ] Production needs to be normal
- [ ] FlockToFedora 2020 (now virtual)
- [ ] profit

HISTORICAL SECTION BELOW HERE

## Order of Bootstrapping in RDU-CC (Deferred Until After IAD2)

1. [x] Get power setup for racks
2. [x] Get items shipped from PHX2
   1. [x] Racks
   2. [x] Systems
3. [x] Get systems installed
   1. [x] Racks
   2. [x] Systems
   3. [x] Inventory systems and confirm
   4. [x] Write up wire spreadsheet for networks
4. [x] SPIKE: Get switches wired into master router.
   1. [x] ~~ex3400 to mgmt vlan~~
   2. [x] ex4300 ports 1-24 to production vlan
   3. [x] ex4300 ports 25-36 to mgmt vlan
   4. [x] ex4300 ports 37-48 to storage vlan
   5. [x] ex5??? 10 gig ports to prod vlan
5. [x] Prepare bootstrap
   1. [x] make RHEL-8.2 usb stick
   2. [x] make CentOS-8.1 usb stick
   3. [x] get mask and gloves for datacenter visit
6. [ ] SPIKE: Build out bastion server (old vhost-s390)
   1. [x] wire idrac to mgmt vlan
   2. [x] configure idrac with ip address
   3. [x] wire eth0 to prod network
   4. [x] wire eth1 to mgmt vlan
   5. [x] install RHEL-8.1 onto hardware
   6. [ ] build an openvpn network
7. [ ] SPIKE: Mgmt interfaces
   1. [x] wire mgmt to top switch.
   2. [ ] power on hardware
   3. [ ] go into bios and configure the ip address
   4. [ ] test that mgmt is reachable from bastion-rdu-cc
8. [ ] SPIKE: Bring up vger
   1. [ ] wire hardware to front switch
   2. [ ] see if hardware works.
   3. [ ] give mgmt login to
   4. [ ] call in hardware repairs
9. [ ] SPIKE: Bring up retrace
   1. [ ] wire hardware into top and back switch
   2. [ ] see if hardware works
   3. [ ] log in via physical console and configure ip address
   4. [ ] test and fix ansible
10. [ ] Bring up storinator01
    1. [ ] log into console
    2. [ ] change ip addresses to proper ones
    3. [ ] test storage and data
11. [ ] Bring up vmhost-rdu-cc-05
    1. [x] Reinstall hardware with RHEL-8.1
    2. [ ] Start deploying guests as needed
12. [ ] Move over vmhost-rdu-cc-01 -> vmhost-rdu-cc-04 from other racks to A06
    1. [ ] Connect mgmt to mgmt network
    2. [ ] Give mgmt ip address via BIOS
    3. [ ] Connect eth0 to external network
    4. [ ] Connect eth1 to internal network
    5. [ ] Configure host to have br1 ip address
13. [ ] Build noc03 for rdu03
    1. [ ] Configure dhcp on eth1
    2. [ ] Configure tftp
    3. [ ] Mirror rhel8 and openshift bits
14. [ ] OpenShift 4.x Install
    1. [ ] Bring up proxy front ends
    2. [ ] Bring up etcd systems
    3. [ ] Begin install of dell fx systems
    4. [ ] Test initial loads
15. [ ] Additional Buildout changes?
    1. [ ] Add here as found.

## Order of Bootstrapping in IAD2

1. [x] IP space: we need to know what networks we have internally and externally.
   - [x] Internal: 10.3.160 -> 10.3.176
   - [x] External: 38.145.60.0/24
2. [ ] Setup DNS space for the zones.
   - [x] Internal reverse zones.
   - [x] Internal forward zones.
   - [x] External reverse zones.
   - [x] External forward zones.
3. [x] Map internal network port allowances
   - [x] Prod to Build/Build to Prod
   - [x] Prod to QA/QA to Prod
4. [x] Map external network ports to internal
   - [x] External to Prod / Prod to External
   - [x] External to QA / QA to External
5. [ ] Background network setup
   - [x] Networking sets up vlans and wiring in top racks.
   - [x] IAD2 firewall rules need to be setup for bastion host.
     - [x] ssh
     - [x] https
     - [x] chrony
     - [x] unbound/DNS
     - [x] openvpn
   - [x] IAD2 firewall rules need to be setup for general outbound access.
     - [x] https
     - [x] unbound/DNS
     - [x] fedmsg
     - [x] other outbound items?
   - [x] Set mgmt router to have dhcp using 10.3.160.* space
6. [ ] Get access to PDU's in racks {{Deferred}}
   - [ ] get ip addresses for rack 101 103
   - [ ] get account and password
   - [ ] test login
   - [ ] power on systems
7. [x] Install initial hardware
   - [x] power on first virthost
   - [x] determine its dhcp address
   - [x] log into system idrac and change base password
   - [x] create admin account and give additional rights to it.
   - [x] give permanent ip address of 10.3.160.10
   - [x] test that system goes to new ip address and works.
   - [x] install rhel-8
   - [x] give host the secondary ip address of bastion01.iad2.
   - [x] test external login abilities to this host
   - [x] test routing via this host from phx2 facility.
   - [x] set up any other temporary services on this host
8. [x] Install additional hardware
   - [x] power on next host
   - [x] determine its dhcp address
   - [x] log into system idrac and change base password
   - [x] create admin account and give additional rights to it.
   - [x] give permanent ip address for host in dns
   - [x] test that system goes to new ip address and works.
   - [x] install rhel-8
9. [ ] Install guests
   - [x] DHCP/TFTP from all Build/QA/etc networks routes to 10.3.163.10

### IAD2 Build list:

1. [ ] Remaining virtual servers
   1. [x] centos01
   2. [x] centos02
   3. [x] vmhost-x86-01
   4. [x] vmhost-x86-02
   5. [x] vmhost-x86-03
   6. [x] vmhost-x86-04
   7. [x] vmhost-x86-05
   8. [x] vmhost-x86-06
   9. [x] vmhost-x86-07
   10. [x] bvmhost-x86-01
   11. [x] bvmhost-x86-02
   12. [x] bvmhost-x86-03
   13. [x] bvmhost-x86-04
   14. [x] bvmhost-x86-05
   15. [x] bvmhost-x86-06
   16. [x] bvmhost-a64-01 (ampere01)
   17. [x] bvmhost-a64-02 (ampere02)
   18. [x] bvmhost-x86-07
   19. [x] autosign01 (fed-cloud12)
   20. [x] sign-box (sign06)
   21. [x] bkernel (bkernel05?)
   22. [x] mustang01
   23. [ ] mustang02 (needs serial)
   24. [ ] power08-01 [hardware broken]
   25. [x] power09-01
   26. [ ] power08-02 [hardware broken]
   27. [x] power09-02
   28. [ ] qvmhost-x86-01 [hardware broken]
   29. [x] qvmhost-x86-02 (qa r640)
   30. [x] qa-x86-01 (qa r640)
   31. [x] qa-a64-01 (ampere)
   32. [x] bvmhost-a64-03 (ampere)
   33. [x] bvmhost-a64-04 (ampere)
2. [ ] Critical infrastructure services (MVF list)
   - [x] bastion2
   - [x] config-mgmt (batcave)
   - [x] rebuild bastion1 as proper virt-guest
   - [x] dns
   - [x] noc/dhcp
   - [x] log-server
   - [x] tang
   - [ ] sign-vault
   - [ ] sign-bridge
   - [ ] autosign
   - [x] certgetter
   - [ ] ipa cluster (rhel8 from rhel7)
   - [x] loopabull
   - [x] mirrormanager vm's
   - [x] noc01
3. [x] SPIKE: database servers
   - [x] db01 postgresql 12 / rhel8
   - [x] db-koji01 postgresql 12 / rhel8
   - [x] db-fas01 postgresql 12 / rhel8
   - [x] db02 (mariadb for wiki)
4. [x] SPIKE: rabbitmq setup
5. [x] SPIKE: bring up openshift cluster
6. [ ] SPIKE: bring up koji and build infra as a temp staging environment to test the list of MVF builds to make sure that stuff works.
   - [x] koji hubs
   - [ ] koji builders
   - [x] kojipkgs
   - [x] bodhi-backend
   - [ ] grobbisplittr
   - [x] mbs
   - [x] rawhide/branched composers
   - [x] compose-x86-01
   - [x] compose-iot
   - [ ] osbs
   - [x] odcs
   - [x] registry
   - [x] pkgs
   - [x] downloads
   - [x] pdc
   1. [x] set partition on the netapp
   2. [x] mount them on the box
   3. [x] run the services here for testing
7. [ ] SPIKE: bring up additional non-build services
   1. [x] proxies
   2. [ ] mailing lists
   3. [x] backups
   4. [x] download servers
   5. [x] sundries
   6. [x] value
   7. [x] wiki
   8. [x] bugzilla2fedmsg
   9. [x] datagrepper
   10. [x] datanommer-db
   11. [ ] FMN (might just sync this over and adjust it)
8. [ ] SPIKE: openqa setup and testing
9. [ ] Evaluate the MVF with community member testing by sending to the lists with a feedback loop & closeout time

## vm / application install / initial configuration checklist

non build: week of 2020-06-01

* [x] bastion01.phx2.fedoraproject.org => bastion01.iad2
* [x] bastion02.phx2.fedoraproject.org => bastion02.iad2
* [x] batcave01.phx2.fedoraproject.org => batcave01.iad2
* [x] blockerbugs01.phx2.fedoraproject.org - mostly up, but some kind of db error needs fixing
* [x] bugzilla2fedmsg01.phx2.fedoraproject.org
* [x] busgateway01.phx2.fedoraproject.org
* [x] certgetter01.phx2.fedoraproject.org
* [x] datagrepper01.phx2.fedoraproject.org - not working, needs investigation
* [x] db01.phx2.fedoraproject.org => db01.iad2
* [x] db03.phx2.fedoraproject.org => db02.iad2
* [x] db-datanommer02.phx2.fedoraproject.org - loading db dump 2020-05-23 22UTC - took 6.5 hours
* [x] db-fas01.phx2.fedoraproject.org => db-fas01.iad2
* [x] download01.phx2.fedoraproject.org => dl01.iad2
* [x] download02.phx2.fedoraproject.org => dl02.iad2
* [x] download03.phx2.fedoraproject.org => dl03.iad2
* [x] download04.phx2.fedoraproject.org => dl04.iad2
* [x] download05.phx2.fedoraproject.org => dl05.iad2
* [x] fedimg01.phx2.fedoraproject.org
* [x] github2fedmsg01.phx2.fedoraproject.org
* [n] grobisplitter01.phx2.fedoraproject.org - move to batcave01.iad2
* [x] ipa01.phx2.fedoraproject.org
* [x] ipa02.phx2.fedoraproject.org
* [x] log01.phx2.fedoraproject.org
* [x] loopabull01.phx2.fedoraproject.org
* [ ] mailman01.phx2.fedoraproject.org MIGRATE?
* [x] memcached01.phx2.fedoraproject.org
* [x] mm-backend01.phx2.fedoraproject.org
* [x] mm-crawler01.phx2.fedoraproject.org
* [x] mm-frontend01.phx2.fedoraproject.org
* [x] mm-frontend-checkin01.phx2.fedoraproject.org
* [x] noc01.phx2.fedoraproject.org => noc01.iad2
* [ ] notifs-backend01.phx2.fedoraproject.org MIGRATE?
* [ ] notifs-web01.phx2.fedoraproject.org MIGRATE?
* [x] ns03.phx2.fedoraproject.org => ns01.iad2
* [x] ns04.phx2.fedoraproject.org => ns02.iad2
* [x] oci-candidate-registry01.phx2.fedoraproject.org - can't connect to vpn, playbook fails
* [x] oci-registry01.phx2.fedoraproject.org - can't connect to vpn, playbook fails
* [x] os-control01.phx2.fedoraproject.org => os-control01.iad2
* [x] os-master01.phx2.fedoraproject.org => os-master01.iad2
* [x] os-master02.phx2.fedoraproject.org => os-master02.iad2
* [x] os-master03.phx2.fedoraproject.org => os-master03.iad2
* [x] os-node01.phx2.fedoraproject.org => os-node01.iad2
* [x] os-node02.phx2.fedoraproject.org => os-node02.iad2
* [x] os-node03.phx2.fedoraproject.org => os-node03.iad2
* [x] os-node04.phx2.fedoraproject.org => os-node04.iad2
* [x] os-node05.phx2.fedoraproject.org => os-node05.iad2
* [x] pdc-backend01.phx2.fedoraproject.org
* [x] pdc-backend02.phx2.fedoraproject.org
* [x] pdc-backend03.phx2.fedoraproject.org
* [x] pdc-web01.phx2.fedoraproject.org
* [x] pdc-web02.phx2.fedoraproject.org
* [x] proxy01.phx2.fedoraproject.org => proxy01.iad2
* [x] proxy101.phx2.fedoraproject.org
* [x] proxy10.phx2.fedoraproject.org
* [x] proxy110.phx2.fedoraproject.org
* [x] rabbitmq01.phx2.fedoraproject.org => rabbitmq01.iad2
* [x] rabbitmq02.phx2.fedoraproject.org => rabbitmq02.iad2
* [x] rabbitmq03.phx2.fedoraproject.org => rabbitmq03.iad2
* [x] secondary01.phx2.fedoraproject.org
* [x] sundries01.phx2.fedoraproject.org => sundries01.iad2
* [x] tang01.phx2.fedoraproject.org => tang01.iad2
* [x] tang02.phx2.fedoraproject.org => tang02.iad2
* [x] value01.phx2.fedoraproject.org
* [x] wiki01.phx2.fedoraproject.org
* [!] zanata2fedmsg01.phx2.fedoraproject.org - do we still need this?
build: week of 2020-06-08

* [x] backup01.iad2.fedoraproject.org
* [x] compose-iot-01.phx2.fedoraproject.org
* [x] compose-x86-01.phx2.fedoraproject.org
* [x] bodhi-backend01.phx2.fedoraproject.org
* [x] db-koji01.phx2.fedoraproject.org
* [x] koji01.phx2.fedoraproject.org
* [x] koji02.phx2.fedoraproject.org
* [x] kojipkgs01.phx2.fedoraproject.org
* [x] kojipkgs02.phx2.fedoraproject.org
* [x] mbs-backend01.phx2.fedoraproject.org
* [x] mbs-frontend01.phx2.fedoraproject.org
* [x] odcs-backend01.phx2.fedoraproject.org - needs info from app owner
* [x] odcs-frontend01.phx2.fedoraproject.org - needs info from app owner
* [x] osbs-control01.phx2.fedoraproject.org
* [x] osbs-master01.phx2.fedoraproject.org
* [x] osbs-node01.phx2.fedoraproject.org
* [x] osbs-node02.phx2.fedoraproject.org
* [x] pkgs02.phx2.fedoraproject.org
* [x] rawhide-composer.phx2.fedoraproject.org
* [x] sign-bridge01.phx2.fedoraproject.org
* [ ] buildvm-NN.phx2.fedoraproject.org (as many as fit) (bvmhost-x86-06/07, 16 each = 32 total)
* [ ] buildvm-aarch64 (as many as fit)
* [ ] buildvm-ppc64le
* [ ] buildvm-armv7
* [ ] buildvm-s390x (just need access confirmed)

qa vms:

- [ ] bastion-comm01.qa.fedoraproject.org

These may be able to consolidate to 1:

- [ ] db-qa01.qa.fedoraproject.org
- [ ] db-qa02.qa.fedoraproject.org
- [ ] db-qa03.qa.fedoraproject.org
- [ ] openqa01.qa.fedoraproject.org - adamw taking?
- [ ] openqa-stg01.qa.fedoraproject.org - adamw taking?
- [ ] resultsdb01.qa.fedoraproject.org - need to figure out where this goes

## Old data

### April 13 Cloud to RDU-CC

* Hardware was deracked and removed from PHX2 data centre 2020-04-14
* Hardware arrived at data-centre 2020-04-20
* Hardware was rack/stacked 2020-??-??
* Hardware was reinstalled 2020-??-??

| Serial No | Model | Mac Address | Current Hostname | New Hostname |
| --------- | ----- | ----------- | ---------------- | ------------ |
| ??? | juniper | ??? | | |
| ??? | juniper | ??? | | |
| ??? | OpenGear | ??? | rack 156 open | opengear02.rdu-cc. |
| ??? | Mustang | ??? | ??? | ??? |
| ??? | Mustang | ??? | ??? | ??? |
| ??? | Power8 | ??? | ??? | ??? |
| ??? | Dell R6550 | ??? | copr-vmhost01 | copr-vmhost01 |
| ??? | Dell R6550 | ??? | copr-vmhost02 | copr-vmhost02 |
| ??? | Dell R6550 | ??? | copr-vmhost03 | copr-vmhost03 |
| ??? | Dell R6550 | ??? | copr-vmhost04 | copr-vmhost04 |
| ??? | Len Ampr | ??? | cloud-a64 | cloud-a64 |
| ??? | Dell R7550 | ??? | retrace03 | ???? |
| ??? | Dell R630 | ??? | ??? | cloudvmhost-x86_64-01 |
| ??? | Dell R630 | ??? | ??? | cloudvmhost-x86_64-02 |
| ??? | Dell FX | ??? | lots-o-stuff | lots-o-stuff |
| ??? | juniper | ??? | | |
| ??? | juniper | ??? | | |
| ??? | 10g switch | ??? | | |
| ??? | Power8 | ??? | ??? | ??? |
| ??? | Cavium 1 | ??? | ??? | ??? |
| ??? | Cavium 2 | ??? | ??? | ??? |
| ??? | Dell FX | ??? | lots-o-stuff | lots-o-stuff |
| ??? | Storinator | ??? | storinator01 | storinator01 |

### April 13 MVF to IAD

* Hardware was deracked and removed from PHX2 data centre 2020-04-14
* Hardware arrived at data-centre 2020-04-20
* Hardware was rack/stacked 2020-??-??
* Hardware was reinstalled 2020-??-??

| Serial No | Model | Mac Address | Current Hostname | New Hostname |
| --------- | ----- | ----------- | ---------------- | ------------ |
| ??? | OpenGear | ??? | rack 146 open | opengear01.rdu-cc. |
| ??? | Dell R630 | B8:2A:72:FC:ED:22 | fed-cloud12 | vmh-x64-09.iad |
| ??? | Dell R630 | B8:2A:72:FC:F2:2E | fed-cloud13 | vmh-x64-10.iad |
| ??? | Len Ampr | E8:6A:64:39:18:99 | vhm-a-22 | vmh-a64-01.iad |
| ??? | Len Ampr | E8:6A:64:39:18:85 | vhm-a-21 | vmh-a64-02.iad |
| ??? | Dell R430 | ??? | sign02 | sign02 |
| ??? | Dell R430 | ??? | bkernel04 | bkernel04 |
| ??? | Mustang | ??? | ??? | ??? |
| ??? | Mustang | ??? | ??? | ??? |
| ??? | Power9 | ??? | ??? | ??? |
| ??? | Power8 | ??? | ??? | ??? |
| ??? | Len Ampr | E8:6A:64:97:6B:49 | oqa-a-02 | oqa-a-01.iad |
| ??? | Dell R640 | ??? | ??? | vhm-qa-01.qa |
| ??? | Dell R640 | ??? | ??? | vhm-qa-02.qa |
| ??? | Dell R640 | ??? | ??? | oqa-x86-01.qa |
| ??? | Dell R630 | ??? | fed-cloud-15 | oqa-x86-02.qa |
| ??? | Power 9 | ??? | ??? | oqa-p64 |
| ??? | Dell R630 | ??? | virthost05 | bvhm-02 |