# Noobaa MCG vs Ceph RGW for OpenShift Image Registry
Considerations:
1. Service (Internal) -vs- Route (External)
   - Service: https://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc
   - Route: http://ocs-storagecluster-cephobjectstore-openshift-storage.apps.example.com
2. Noobaa's MCG (Multi Cloud Gateway) -vs- Ceph's RGW (RADOS Gateway)

:::danger
Don't use the Service (`.svc`) address directly, because the nodes/kubelets can't resolve it 😔 (see the Appendix for the ClusterIP workaround)
:::
I'm going to document **RGW via Service** because it is a more direct network path and simpler architecture than [Noobaa via Route](https://hackmd.io/@johnsimcall/H1dV_f32C). Noobaa may be valuable if you intend to mirror, replicate or encrypt the object bucket in the future.
Documentation available at https://docs.openshift.com/container-platform/latest/registry/configuring_registry_storage/configuring-registry-storage-rhodf.html#registry-configuring-registry-storage-rhodf-cephrgw_configuring-registry-storage-rhodf
:::warning
It is NOT recommended to use CephFS (shared filesystem / RWX) storage for the image registry
:::
## Ceph RGW via Service
Please note that the name of the `Secret` holding the bucket's access/secret keys is hard-coded as `image-registry-private-configuration-user`:
https://github.com/openshift/cluster-image-registry-operator?tab=readme-ov-file#image-registry-private-configuration-user-secret
```yaml
---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: imageregistry
  namespace: openshift-image-registry
spec:
  generateBucketName: imageregistry
  storageClassName: ocs-storagecluster-ceph-rgw
```
```bash
# Gather the connection details created by the ObjectBucketClaim above
CLAIM_NAME=$(oc get objectbucketclaim imageregistry -n openshift-image-registry -o jsonpath='{.spec.objectBucketName}')
BUCKET_NAME=$(oc get objectbucket $CLAIM_NAME -n openshift-image-registry -o=jsonpath='{.spec.endpoint.bucketName}')
ACCESS_KEY=$(oc extract secret/imageregistry -n openshift-image-registry --keys=AWS_ACCESS_KEY_ID --to=-)
SECRET_KEY=$(oc extract secret/imageregistry -n openshift-image-registry --keys=AWS_SECRET_ACCESS_KEY --to=-)
ENDPOINT=$(oc get objectbucket $CLAIM_NAME -n openshift-image-registry -o=jsonpath='{.spec.endpoint.bucketHost}')

echo "CLAIM_NAME: ${CLAIM_NAME}
BUCKET_NAME: ${BUCKET_NAME}
ACCESS_KEY: ${ACCESS_KEY}
SECRET_KEY: ${SECRET_KEY}
ENDPOINT: ${ENDPOINT}"

# The secret name must be "image-registry-private-configuration-user"
oc create secret generic image-registry-private-configuration-user \
  --namespace openshift-image-registry \
  --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${ACCESS_KEY} \
  --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${SECRET_KEY}

# Trust the service-serving CA so the registry can reach RGW over HTTPS
oc extract configmap/openshift-service-ca.crt --keys=service-ca.crt -n openshift-ingress --confirm
oc create configmap imageregistry-trusted-ca --from-file=ca-bundle.crt=./service-ca.crt -n openshift-config

# Point the Image Registry at the RGW bucket
oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${BUCKET_NAME}\"',"region":"us-east-1","regionEndpoint":'\"https://${ENDPOINT}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"imageregistry-trusted-ca"}}}}}' --type=merge
```
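The one-line JSON patch above is dense and easy to mistype. As a readability aid, here is a sketch (with placeholder values standing in for the `BUCKET_NAME` and `ENDPOINT` gathered above) that builds the same payload; its output could be fed to something like `oc patch config.image/cluster --type=merge -p "$(python3 build_patch.py)"` (the script name is hypothetical):

```python
import json

# Placeholder values -- substitute the BUCKET_NAME and ENDPOINT gathered above.
bucket_name = "imageregistry-example"
endpoint = "rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc"

# Same structure as the inline patch in the bash snippet above.
patch = {
    "spec": {
        "managementState": "Managed",
        "replicas": 2,
        "storage": {
            "managementState": "Unmanaged",
            "s3": {
                "bucket": bucket_name,
                "region": "us-east-1",
                "regionEndpoint": f"https://{endpoint}",
                "virtualHostedStyle": False,
                "encrypt": False,
                "trustedCA": {"name": "imageregistry-trusted-ca"},
            },
        },
    }
}

print(json.dumps(patch))
```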
## Appendix
Kincl and I observed that nodes of the `salsa` cluster (which pull the images) couldn't resolve the DNS name of the service.
This doesn't make sense to me, because the nodes *can* resolve the Image Registry's own Service address,
*image-registry.openshift-image-registry.svc*,
but not the backend S3 bucket's address???
*rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc*
The error from kubelet (`oc describe pod`) is:
```
Failed to pull image "image-registry.openshift-image-registry.svc:5000/jcall/simple-php-git@sha256:0837545fb6e0c4316ff60af97db3f69ca194ded9bdc0bd43f92c980a42c0b71d": parsing image configuration: Get "https://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc/imageregistry-c7cf25b3-5749-4753-b613-7de8abac1188/docker/registry/v2/blobs/sha256/59/59e461f8bd7afed7f65d883705ce52dae0b6615fffd6423ba76a3f349b41aa1b/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=W4MTZ00AJWO1FBYP4EUK%2F20240909%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240909T070553Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=293958e35b617cabcb21875659d42bd6b39dd746f3e57f01edd4ad8efc5de607": dial tcp: lookup rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc on 10.15.168.11:53: no such host
```
So we patched the ImageRegistry to use HTTP and the IP address instead of the FQDN of the Service.
```bash
IP_ADDRESS=$(oc get service rook-ceph-rgw-ocs-storagecluster-cephobjectstore -n openshift-storage -o jsonpath='{.spec.clusterIP}')
oc patch config.image/cluster -p '{"spec":{"storage":{"s3":{"regionEndpoint":'\"http://${IP_ADDRESS}\"'}}}}' --type=merge
```
The resulting `spec.storage.s3` stanza:
```yaml
s3:
  bucket: imageregistry-886604e6-03b7-485b-b484-921a13f4137a
  region: us-east-1
  regionEndpoint: http://172.30.217.113
  virtualHostedStyle: false
```
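To reproduce the kubelet's DNS failure from a node (e.g. inside an `oc debug node/<name>` session), a quick resolution check along these lines can help. This is a diagnostic sketch, not part of the official procedure; the hostname is the one from the kubelet error above:

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if the host's configured DNS can resolve this name."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

# The Service FQDN from the kubelet error above. On a node that cannot
# resolve *.svc names, this prints False.
svc = "rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc"
print(f"{svc} resolvable: {can_resolve(svc)}")
```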