# Quay Operator Overview for Pulp - Meeting notes
Many operators on OperatorHub reimplement functionality themselves because their authors didn't know that Kubernetes now provides it.
Examples:
1. They manage their own pods instead of using Deployments.
2. They feel they need more granular control over pod layout, when built-in features like ordering and quorum would have covered it (see the sketch after this list).
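A Go sketch of those built-in primitives, using the standard k8s.io/api types; all names here are illustrative, not Quay's. A StatefulSet's OrderedReady policy handles startup ordering, and a PodDisruptionBudget expresses a quorum requirement, with no custom pod management in the operator:

```go
// Sketch only: what "ordering and quorum" look like with built-in
// primitives instead of operator-managed pods. All names are illustrative.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "example"}

	// Ordered startup/shutdown comes free with a StatefulSet.
	sts := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "example"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:            int32Ptr(3),
			ServiceName:         "example",
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "example:latest"}},
				},
			},
		},
	}

	// Quorum: voluntary disruptions may never drop below 2 of the 3 pods.
	minAvailable := intstr.FromInt(2)
	pdb := policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "example"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: labels},
		},
	}

	fmt.Println(sts.Name, pdb.Name) // normally you would Create() these via a client
}
```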
How to handle optional components:
- Kind: a string identifying the component
- Managed: true or false
Didn't want to expose config options at the CR level.
If you want any customization, you have to make it unmanaged and deploy it yourself. (This was done to ease development & maintenance.)
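Roughly, the CR surface described above, sketched as Go types; field names are assumed from the discussion, not copied from the Quay CRD:

```go
// Sketch of the minimal component toggle described above; field names are
// assumptions based on the discussion, not the actual Quay CRD.
package sketch

type Component struct {
	Kind    string `json:"kind"`    // which component, e.g. "redis" or "postgres"
	Managed bool   `json:"managed"` // false: you deploy and operate it yourself
}

type QuayRegistrySpec struct {
	// No per-component config options on purpose: customization means
	// flipping managed to false and bringing your own deployment.
	Components []Component `json:"components,omitempty"`
}
```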
Yanis: FSI (financial services) customers frequently have to use a third-party external Redis / Postgres cluster, server, or service.
"You have to have a really good reason to add a field to the CR."
On OLM, people would copy entire fields from kube objects, sometimes duplicating entire kube objects onto the CRD (things like status monitors). This was too hard to manage, especially during migrations (upgrades).
Quay will see how this approach works out. Feedback will be coming in.
They have a schema: it's how users configure Quay at the application level, not the operator/infrastructure level. They have a config editor; it edits secrets. A service run by the operator validates the secret and updates the config bundle reference on the QuayRegistry.
Config.yml is the entry point for the customer.
They generate a config.yml for you with all the correct values.
Operator-managed fields are marked in the UI as externally managed; you can mark a component unmanaged to manage it yourself.
All the config logic lives in config.yml rather than in the CR. Alec pushed back against putting it in the CR, because they change fields heavily & frequently with upgrades, and because of secret values. You have to mount it into the pods anyway.
The bundle is a secret, and within the bundle is the config.yml.
Certs can be in it too.
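A sketch of that bundle shape using the standard client types; the secret name and key names are assumptions, not the operator's exact conventions:

```go
// Sketch of the config bundle described above: one Secret whose keys hold
// the config file plus any extra certs. Names here are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	bundle := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "quay-config-bundle"},
		StringData: map[string]string{
			// Application-level config: the customer's entry point.
			"config.yaml": "SERVER_HOSTNAME: quay.example.com\n",
			// Extra certs can ride along in the same bundle.
			"extra_ca_cert.crt": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
		},
	}
	fmt.Println(bundle.Name) // the CR then references this secret by name
}
```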
Quay uses object storage. It uses any S3-compatible API.
For testing & development it also has local storage.
Joey insisted "no" to local storage: with it, you can't scale past a single node.
OpenShift comes with RHOCS, which bundles NooBaa.
They require RHOCS.
Only object storage is scalable and performant.
The operator creates an ObjectBucketClaim; it uses whatever the cluster provides.
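A sketch of such a claim; the ObjectBucketClaim group/version is the standard one, while the names and the NooBaa storage class are assumptions to adjust for your cluster:

```go
// Sketch of an ObjectBucketClaim: the portable way to ask "give me a
// bucket from whatever this cluster provides".
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	obc := &unstructured.Unstructured{}
	obc.SetAPIVersion("objectbucket.io/v1alpha1")
	obc.SetKind("ObjectBucketClaim")
	obc.SetName("quay-storage")
	obc.Object["spec"] = map[string]interface{}{
		"generateBucketName": "quay",
		"storageClassName":   "openshift-storage.noobaa.io", // assumed NooBaa default
	}
	fmt.Println(obc.GetName()) // created via a dynamic client in a real operator
}
```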
They considered deploying minio, but they cannot support minio. Everything shipped must be supported.
They can use RHOCS in a manual fashion, with a couple of limited steps.
Many customers will use external cloud storage.
They had to get the RHOCS PM to allow customers to use it in production for Quay without buying RHOCS.
Nothing stopping customers from using minio or similar.
Quay app pods are the only things that talk to object storage.
Quay is not tied to the OCS operator, because the OCS operator is slow to deploy; they avoid depending on it.
Quay does storage proxying, so all client pulls go through Quay rather than directly to the object storage (NooBaa). Alec tried direct pulls; it can work, but the certs are different: the client would have to have the NooBaa cert.
Storage proxying is not slow, because it goes through nginx, not through Python.
Proxying can be turned off via a config option, but they do not support that.
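The flag behind this is FEATURE_PROXY_STORAGE in Quay's config reference; a sketch that emits that config.yml fragment, everything around the flag being illustrative:

```go
// Sketch: the config.yml switch behind storage proxying. FEATURE_PROXY_STORAGE
// is Quay's documented flag; the rest of this snippet is illustrative.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	cfg := map[string]interface{}{
		// true: client pulls are proxied through Quay's nginx tier instead
		// of being redirected straight to the object storage endpoint.
		"FEATURE_PROXY_STORAGE": true,
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // FEATURE_PROXY_STORAGE: true
}
```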
They do feature detection for Routes and for the object bucket API.
A route is a managed component.
A Route is used for the load balancer, possibly in the future for an Ingress.
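A sketch of how that feature detection can look with the discovery client; the group/version strings are the standard ones for Routes and ObjectBucketClaims, the rest is illustrative:

```go
// Sketch: detect optional cluster APIs (OpenShift Routes, ObjectBucketClaims)
// before deciding to manage components that depend on them.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// hasKind reports whether the cluster serves the given kind at groupVersion.
func hasKind(dc discovery.DiscoveryInterface, groupVersion, kind string) bool {
	list, err := dc.ServerResourcesForGroupVersion(groupVersion)
	if err != nil {
		return false // group/version not served on this cluster
	}
	for _, r := range list.APIResources {
		if r.Kind == kind {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("routes:", hasKind(dc, "route.openshift.io/v1", "Route"))
	fmt.Println("object buckets:", hasKind(dc, "objectbucket.io/v1alpha1", "ObjectBucketClaim"))
}
```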
"When develop your operator, don't feel you have to follow the herd if it will make your life harder."
https://access.redhat.com/documentation/en-us/red_hat_quay/3.4/html/deploy_red_hat_quay_on_openshift_with_the_quay_operator/index