# Use Cases
* Want to store and deliver large binary blobs globally (using pulp_file)
  * Content is uploaded first to the "central" Pulp
  * The "geo" Pulps are points of presence; they sync from the "central" one
* Distribute Ansible roles and collections (using pulp_ansible)
  * pulp_ansible does not have RBAC today
* Role-based access control
  * Want to guard content at the Artifact level
    * Recommended using object-level permissions with the RBAC Content Guard (a hedged sketch follows this list)
  * Want to use LDAP for Authentication and Pulp's RBAC for Authorization
    * Brian's notes on LDAP configuration: https://hackmd.io/ED9UpscNSRW86Le3xNzVeg (a settings sketch also follows this list)
* Versioning of repositories is a benefit; they want to expose snapshots
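
A minimal sketch of the object-level RBAC Content Guard recommendation, assuming the pulpcore REST API: create a guard, grant a downloader role (assumed to carry the "download_rbaccontentguard" permission mentioned under "Things Brian shared") to one user on that guard object, then attach the guard to the pulp_file distribution. The host, credentials, role name, and endpoint paths are assumptions; verify them against the deployed pulpcore version.

```python
# Hypothetical sketch: create an RBAC content guard, grant the downloader role
# to a user on that specific guard (object-level), and attach the guard to a
# pulp_file distribution. Paths, role name, and hrefs are assumptions.
import requests

BASE = "https://pulp.example.com"   # assumed Pulp API host
AUTH = ("admin", "password")        # assumed credentials

# 1. Create the RBAC content guard.
guard = requests.post(
    f"{BASE}/pulp/api/v3/contentguards/core/rbac/",
    json={"name": "geo-downloaders"},
    auth=AUTH,
).json()

# 2. Grant the download role on this guard object to a single user.
requests.post(
    f"{BASE}{guard['pulp_href']}add_role/",
    json={"role": "core.rbaccontentguard_downloader", "users": ["alice"]},
    auth=AUTH,
)

# 3. Attach the guard to the distribution that serves the artifacts.
dist_href = "/pulp/api/v3/distributions/file/file/<uuid>/"   # placeholder
requests.patch(
    f"{BASE}{dist_href}",
    json={"content_guard": guard["pulp_href"]},
    auth=AUTH,
)
```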
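And a minimal `/etc/pulp/settings.py` fragment for the LDAP-for-authentication piece, assuming the django-auth-ldap package is installed; server URIs, DNs, and filters are placeholders, and Brian's linked notes are the authoritative reference for a known-working configuration.

```python
# Hypothetical /etc/pulp/settings.py fragment: LDAP handles authentication,
# Pulp's RBAC still handles authorization. Setting names come from
# django-auth-ldap; URIs, DNs, and the search filter are placeholders.
import ldap
from django_auth_ldap.config import LDAPSearch

AUTHENTICATION_BACKENDS = [
    "django_auth_ldap.backend.LDAPBackend",       # authenticate against LDAP
    "django.contrib.auth.backends.ModelBackend",  # keep local accounts (e.g. admin)
    # ...plus whatever RBAC/object-permission backend your pulpcore version
    # enables by default, so Pulp's role-based authorization keeps working.
]

AUTH_LDAP_SERVER_URI = "ldaps://ldap.example.com"
AUTH_LDAP_BIND_DN = "cn=pulp,ou=service,dc=example,dc=com"
AUTH_LDAP_BIND_PASSWORD = "changeme"
AUTH_LDAP_USER_SEARCH = LDAPSearch(
    "ou=people,dc=example,dc=com", ldap.SCOPE_SUBTREE, "(uid=%(user)s)"
)
```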
# Deployment
* Running standalone podman on each host, because setting up k8s in each geo is a lot of overhead; their k8s is in one geo only
* One image for each service
* They build images locally
  * They want only specific plugins included
  * They want more control over when upgrades happen
  * The recent removal of Ansible from the "build" steps made it easier for them to build locally
* Configuring the "geo" Pulps is a management challenge
# Squeezer Use
* Using squeezer because Ansible is used all over their organization
  * Used to set up and maintain the "geo" Pulps
* squeezer's functionality is noticeably behind the Pulp CLI
* Pain point: squeezer uploads large files serially, so medium-sized artifacts are taking 15+ minutes (a parallel-upload sketch follows this list)
  * [AI] we need a link to the specific issue so we can look at it
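
One possible stop-gap for the serial-upload pain point, assuming the bottleneck is many files being pushed one after another: a small Python sketch that uploads artifacts to pulpcore's `/pulp/api/v3/artifacts/` endpoint with a thread pool. The host, credentials, directory, and worker count are assumptions, and files larger than a single request allows would need the chunked upload API instead (not shown).

```python
# Hypothetical workaround sketch: push artifacts to the Pulp API in parallel
# instead of one-at-a-time. Host, credentials, and worker count are assumptions.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import requests

BASE = "https://pulp.example.com"
AUTH = ("admin", "password")

def upload(path: Path) -> str:
    """Upload one file as a Pulp artifact and return its href."""
    with path.open("rb") as fh:
        response = requests.post(
            f"{BASE}/pulp/api/v3/artifacts/",
            files={"file": (path.name, fh)},
            auth=AUTH,
        )
    response.raise_for_status()
    return response.json()["pulp_href"]

files = [p for p in Path("./blobs").iterdir() if p.is_file()]
# Keep several uploads in flight at once; tune max_workers to the link and host.
with ThreadPoolExecutor(max_workers=4) as pool:
    for href in pool.map(upload, files):
        print(href)
```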
# Things Brian shared
* PulpCon is Nov 7 - 12th. The schedule is not available yet, but the proposed talks are [here](https://discourse.pulpproject.org/t/pulpcon-2022-call-for-proposals/590)
  * I'd specifically like to see if you can join the "operating Pulp" group, which I hope to outline during the "Operating Pulp in Production" session at PulpCon
* Brian's notes on LDAP configuration: https://hackmd.io/ED9UpscNSRW86Le3xNzVeg
* Assign the "download_rbaccontentguard" permission to individual pulp_file Artifacts and use the RBAC Content Guard
  * Talk with @gerrod on Matrix for more info on this; I asked him this question on #pulp-dev to confirm my understanding
* Idea: use on_demand sync to make content available "immediately" (as fast as the metadata can be pulled), then queue an "immediate" sync right after. The "geo" Pulp downloads the artifacts behind the scenes, while clients can still trigger downloads of their own during that "immediate" sync. This is the best of both worlds: it is as fast as on-demand, yet it still proactively delivers all content, so users never wait any longer than they have to unless they are pulling "minutes old" files. (A sketch follows below.)
  * [ipanova] is this like the `background` sync policy we had in pulp2?
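
A rough sketch of that idea against a "geo" Pulp's REST API, assuming pulp_file: two remotes point at the same central content, one with policy `on_demand` and one with `immediate`; the first sync exposes the metadata almost instantly, the second pulls the binaries down proactively. Hosts, names, hrefs, and the task-polling helper are assumptions, not a confirmed design.

```python
# Hypothetical sketch of the on_demand-then-immediate idea on a "geo" Pulp.
# Hosts, names, and the repository href are placeholders.
import time

import requests

BASE = "https://geo-pulp.example.com"
AUTH = ("admin", "password")
UPSTREAM = "https://central-pulp.example.com/pulp/content/blobs/"

def wait(task_href: str) -> None:
    """Poll a Pulp task until it leaves the waiting/running states."""
    while True:
        task = requests.get(f"{BASE}{task_href}", auth=AUTH).json()
        if task["state"] not in ("waiting", "running"):
            return
        time.sleep(2)

def make_remote(name: str, policy: str) -> str:
    """Create a pulp_file remote for the central manifest with the given policy."""
    response = requests.post(
        f"{BASE}/pulp/api/v3/remotes/file/file/",
        json={"name": name, "url": f"{UPSTREAM}PULP_MANIFEST", "policy": policy},
        auth=AUTH,
    )
    response.raise_for_status()
    return response.json()["pulp_href"]

repo_href = "/pulp/api/v3/repositories/file/file/<uuid>/"   # placeholder
fast = make_remote("central-on-demand", "on_demand")
full = make_remote("central-immediate", "immediate")

# Metadata-only pass: content is visible to clients almost immediately.
wait(requests.post(f"{BASE}{repo_href}sync/", json={"remote": fast}, auth=AUTH).json()["task"])
# Follow-up pass: proactively fetch every artifact behind the scenes.
wait(requests.post(f"{BASE}{repo_href}sync/", json={"remote": full}, auth=AUTH).json()["task"])
```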