This is a collection of things for a retrospective on our 2025 iad2 -> rdu3 datacenter move.
Please feel free to add new items you think of or add +1s to things you really agree with.
Please add your name to the end so we know who added it and can ask for more details if needed.
Things that went well
=====================
- contributors / users were quite patient with us overall, and waited for things to come back online [kevin]
- We met the target and got the job done [kevin]
- The new hardware is nice and fast (several comments from people already about things being lots faster) [kevin]
- RHIT / storage / networking were all responsive and really helpful [kevin]
- The storage switcharoo was much faster than expected [zlopez]
- Appreciated that most of the IPs stayed the same, just 10.3.* changed to 10.16.*, so it was easy to update host vars (see the sketch after this list) [zlopez]
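
A minimal sketch of the kind of bulk host vars update this made possible, assuming GNU grep/sed and that the host vars live under `inventory/host_vars/` (the paths and the simple octet swap are assumptions, not the commands we actually ran):

```
# Preview 10.3.x.y -> 10.16.x.y changes in host vars, then apply them.
grep -rlE '\b10\.3\.[0-9]+\.[0-9]+\b' inventory/host_vars/ | while read -r f; do
    sed -E 's/\b10\.3\.([0-9]+\.[0-9]+)\b/10.16.\1/g' "$f" | diff -u "$f" - || true
done

# Once the diffs look right:
grep -rlE '\b10\.3\.[0-9]+\.[0-9]+\b' inventory/host_vars/ \
    | xargs sed -i -E 's/\b10\.3\.([0-9]+\.[0-9]+)\b/10.16.\1/g'
```
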
Things that went not so well
============================
- DNS resolution was a problem: we got prod rdu3 instead of stg.rdu3
- ntap was firewalled, or not routed, for a bit
- **need** to make sure we coordinate DB migration with service bring-up
- Gets a bit confusing when the playbook is changing often, e.g. default IPA servers changing from ipa.stg.iad2 to ipa.stg.rdu3 (maybe less of a problem with prod)
- A couple of places where data is duplicated in ansible (likely a copy-and-paste error); there are no warnings and the second version wins (so you change the first and nothing changes)
- Hard-coded iad2 things in playbook tasks (hopefully all found with the staging rollout)
- Also there were a lot of hardcoded 10.3 IPs in things like nftables, postfix, etc. We should template these out properly (see the grep sketch after this list) [greg]
- Weird issue with `db01.stg.rdu3` prompt showing PROD-RDU3 -- hostname wasn't in the staging group?
- Same happened with `pkgs01.stg.rdu3`; it was missing from the staging group and thus using production variables - zlopez
- Everyone try to rest on Sunday
- we should drop some old databases we still have (like pdc, which is MASSIVE!) before the move (I will do that in the next few days) - [kevin]
- pagure.io was being hammered by AI bots, making doing commits annoying - [kevin]
- kevin forgot to drop the pdc db so it made moving db01 not so great - [kevin]
- gunicorn doesn't return redirects; instead it just returns empty output, so testing http://localhost:8000/archives didn't work, but testing http://localhost:8000/archives/ did (see the curl sketch after this list)
- db03 did not want to rsync. Turns out I had to use `--inplace` or rsync would make a full temporary copy of a ~300GB binary file on the destination and run out of disk space (see the rsync sketch after this list).
- networking outage on Wednesday was a big drag and stopped us from getting much done
- MTU issue was annoying to track down and caused the Wednesday outage (see the ping sketch after this list) [kevin]
- IT firewall issue from proxies -> koji01/02 took a very long time to debug/figure out. [kevin]
- We found out our sssd filter wasn't working, so local users vs IPA users was a cause of problems/confusion. Some playbooks didn't actually create the user right. [kevin]
- pkgs had a number of issues because we hadn't reinstalled it in many years. Perhaps reinstalling the stg one from time to time would be good? [kevin]
- we switched kojipkgs to rdu3 pretty early, but it turns out the playbook didn't mount the ostree nfs mounts, so ostree users got errors for a few days. ;( [kevin]
- Plenty of RDU3 machines were missing from the staging group, so they got deployed with production vars and didn't work as expected (see the inventory check after this list) [zlopez]
- We forgot to migrate the search index for mailman, which ended up blocking the shutdown of IAD2 [zlopez]
- There were a lot of IAD2 leftovers; we updated ansible for RDU3 but didn't run the playbooks, which caused issues later [zlopez]
- idrange and dnarange in IPA are not replicated [zlopez]
- postfix wouldn't deploy properly due to missing mapfiles, which don't seem to be managed. I have notes, can PR at some point [greg]
- Was it a good idea to deploy a new version of RabbitMQ on a new OS during the move? If we'd had time to test it, sure, but given the timeframe we were working with, that seemed to cause a *lot* of stress [greg]
- Old clients are not automatically re-enrolled to the new IPA hosts; that needs to be done manually by running `ipa-client-install --uninstall` and then `ipa-client-install` again (see the re-enrollment sketch after this list) [zlopez]
- Mailman is not cleaning up old processed tables (pendedkeyvalue and bounceevent), which caused OOM issues in the new deployment (see the table-size check after this list) [zlopez]
- Some fixes were missing in ansible (for example src.fp.o cookie fix) [zlopez]
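
For the hardcoded 10.3 IPs mentioned above (nftables, postfix, etc.), a sketch of how to sweep the ansible repo for leftovers before templating them out; the paths and file globs are assumptions:

```
# Find hardcoded IAD2 (10.3.x.x) addresses in templates and config files so
# they can be replaced with host/group vars.
grep -rnE '\b10\.3\.[0-9]+\.[0-9]+\b' roles/ playbooks/ inventory/ \
    --include='*.j2' --include='*.cfg' --include='*.conf' --include='*.yml'
```
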
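For the gunicorn redirect confusion, checking the status line instead of the body makes the behaviour obvious; a sketch using the same URLs as the item above:

```
# Print the HTTP status and any redirect target instead of relying on the
# (possibly empty) response body.
curl -sS -o /dev/null -w '%{http_code} %{redirect_url}\n' http://localhost:8000/archives
curl -sS -o /dev/null -w '%{http_code} %{redirect_url}\n' http://localhost:8000/archives/
```
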
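The rsync sketch for the db03 issue; the hostname and paths are placeholders, not the exact command used:

```
# --inplace updates the destination file directly instead of building a full
# temporary copy first, which matters when a single file is ~300GB and the
# destination doesn't have room for two copies of it.
rsync -avP --inplace /var/lib/pgsql/data/ db03.rdu3.fedoraproject.org:/var/lib/pgsql/data/
```
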
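For tracking down MTU mismatches like the one behind the Wednesday outage, non-fragmentable pings of increasing size narrow it down quickly; the target host and sizes here are just examples:

```
# 1472 bytes of ICMP payload + 28 bytes of headers = 1500, the usual Ethernet MTU.
# If the larger size fails but the smaller one works, something in the path has
# a smaller MTU (or is dropping the fragmentation-needed replies).
ping -c 3 -M do -s 1472 bastion01.rdu3.fedoraproject.org
ping -c 3 -M do -s 1400 bastion01.rdu3.fedoraproject.org
```
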
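For the hosts that were missing from the staging group, something like this could catch it before a deploy; a sketch assuming the inventory path and that the group is called `staging`:

```
# Compare every stg.rdu3 host in the inventory against the members of the
# 'staging' group; anything printed got (or would get) production vars.
comm -13 \
    <(ansible -i inventory/inventory staging --list-hosts | awk 'NR>1 {print $1}' | sort) \
    <(ansible -i inventory/inventory all --list-hosts | awk '/stg\.rdu3/ {print $1}' | sort)
```
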
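The manual re-enrollment mentioned above, roughly; a sketch where the server and extra flags are assumptions (the exact options depend on how the host was originally enrolled and what credentials/OTP are used):

```
# Drop the old enrollment against the IAD2 IPA servers, then enroll against RDU3.
ipa-client-install --uninstall
ipa-client-install --unattended \
    --server ipa01.rdu3.fedoraproject.org \
    --domain fedoraproject.org \
    --mkhomedir
```
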
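For the mailman tables that were never cleaned up, a quick way to spot the problem before it turns into OOM; the database name is an assumption, and the actual cleanup should go through mailman rather than raw deletes:

```
# Show how big the two problem tables have grown.
psql mailman -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
                 FROM pg_statio_user_tables
                 WHERE relname IN ('pendedkeyvalue', 'bounceevent');"
```
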
Things we could do to make the next one better
==============================================
- An extra week or two with network to the machines would have been very nice. As it was, we were installing right up to the deadline; with a few more weeks we could have deployed more things and tested more, and perhaps would have found the MTU issue. [kevin]
- Don't forget to also move the IPA CA renewal and CRL server roles to the new machines (already done for both staging and production, but let's keep this in mind for the next migration). See https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/migrating_to_identity_management_on_rhel_9/index#assigning-the-ca-renewal-server-role-to-the-rhel-9-idm-server_assembly_migrating-your-idm-environment-from-rhel-8-servers-to-rhel-9-servers
- openshift-image-registry is an operator; disabling its pods will just roll them back. Do `oc edit configs.imageregistry.operator.openshift.io` and change `managementState:` to `Removed` instead (see the `oc patch` sketch after this list)
- db for release-monitoring is hardcoded in vars.yml in ansible-private
- (Greg) openqa.stg was broken - the openqa-lab hosts are still in iad2, and port 80 is not open RDU3->IAD2 for the reverseproxy
- AdamW was keen to restore access rather than wait til next week, so I restored the stg.iad2 proxies for *just openqa*
- `cd /etc/httpd/conf.d/ ; mkdir disabled ; mv * disabled/ ; mv disabled/openqa* . ; systemctl restart httpd`
- DNS updated (I couldn't do this, I think I need to be in sysadmin-dns. Asked DKirwan to help)
- the RDU3 proxies don't have the reverseproxy info for openqa - they just return 421. I assume this will be fixed later by "something"
- this will need to be reverted once the lab has moved
- (Greg) there's a lot of `hosts: db01.iad2.fedoraproject.org:db01.stg.iad2.fedoraproject.org` in the openshift playbooks
- Doesn't need to change now as the DBs exist, but we should clean that up
- Don't forget the mailman search index next time
- We need to back up the dnarange (use `ipa-replica-manage dnarange-show`) and assign the old ranges to the new servers when the old ones are removed (maybe adding that to ansible); see [the infra ticket](https://pagure.io/fedora-infrastructure/issue/12641), the [IdM documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/adjusting-id-ranges-manually_managing-users-groups-hosts) and the [Recover ID ranges doc](https://www.freeipa.org/page/V3/Recover_DNA_Ranges); there's a sketch after this list.\
**Actually**, we should ask the IPA folks for the proper way to migrate an IPA cluster without messing up the DNA ranges.
- TFTP/Grub config on noc01 isn't managed by Ansible, probably should be [greg]
- Would be nice if we could get the openshift-apps playbooks to actually say what they're applying, right now it's 20x "did an API thing" [greg]
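
A scriptable equivalent of the `oc edit` step for the image registry operator mentioned above (same object, just patched non-interactively):

```
# Mark the integrated image registry operator as Removed so it stops
# recreating its pods.
oc patch configs.imageregistry.operator.openshift.io cluster \
    --type merge --patch '{"spec":{"managementState":"Removed"}}'
```
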
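A sketch of the DNA range bookkeeping described above; the hostname and range values are placeholders, and the real ranges have to be read off the old servers before they are decommissioned:

```
# Record the current ranges on each replica before removing the old ones.
ipa-replica-manage dnarange-show
ipa-replica-manage dnanextrange-show

# Hand a retiring server's range over to one of the new replicas.
ipa-replica-manage dnarange-set ipa01.rdu3.fedoraproject.org 389000000-389099999
```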