---
title: Packaging and shipping mechanism of operator packages for OLM v1
authors:
  - "@anik120"
reviewers:
  -
approvers:
  -
creation-date: 2023-01-30
last-updated: 2023-03-01
tags: operator catalogs
---

# Packaging and shipping mechanism of operator packages for OLM v1

## Motivation

Operator authors today build `operator bundles` for each release of their operator, which consist of all the deploy manifests, along with an OLM v0 CR, [the ClusterServiceVersion (CSV)](https://github.com/operator-framework/api/blob/master/crds/operators.coreos.com_clusterserviceversions.yaml), packaged into an OCI image. These images are referred to as `Bundle Images` in the OLM v0 eco-system.

A plethora of sub-components then had to be spun up to support `Bundle Images`. For example (a rough sketch of this workflow appears below):

1. A cli tool, `opm`, had to be written to help catalog authors add operator releases to their catalog, since adding a release to a catalog involves:
   a) pulling the `Bundle Image` to a local container runtime, using a third-party cli tool like docker/podman, etc
   b) running the image and copying its content (from a pre-determined, hardcoded location) into the local filesystem
   c) adding the CSV and other deploy manifests to the catalog (a SQLite DB), which is then itself copied into an OCI image (referred to as the index image)
2. On clusters, pods that pull the index images and `Bundle Images` had to be spun up to serve the content of a catalog to other components.

These have been sources of high maintenance cost, along with high resource consumption on and off clusters in production. A more detailed discussion of the maintenance cost and resource consumption can be found [here](https://hackmd.io/-Rrvhgb3Q7m9HYZW2gxQtQ?view#Motivation).

There are, however, other ways than OCI images to package and ship manifests that do not involve multiple steps for operator authors, catalog authors, and the on-cluster components that consume them. [OCI Artifacts](https://github.com/opencontainers/artifacts), for example, is an initiative that stemmed from other projects realizing the benefits of the distribution mechanism that image registries popularized, especially due to its compatibility with the k8s eco-system. (See the reference section at the end of this doc for an article that discusses OCI images vs OCI artifacts.)

Despite the OCI Artifacts [distribution mechanism](https://github.com/opencontainers/distribution-spec/) being more compatible with the use case of building and shipping operator packages for OLM, the OCI Image format for `Bundle` and `Catalog` images was adopted in OLM v0 due to a hard security requirement from Openshift that all images imported into the cluster must run through [CRI-O](https://github.com/cri-o/cri-o), the container runtime interface (eg CRI-O running with an [ICSP](https://docs.openshift.com/container-platform/4.10/openshift_images/image-configuration.html) to secure the cluster from images from unwanted image registries).
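To make those steps concrete, here is a rough sketch of the OLM v0 workflow described above, assuming a conventional `bundle.Dockerfile` and the v0-era `opm index add` command; all image names are placeholders:

```bash
# Build and push the Bundle Image: the deploy manifests and CSV are copied
# into an OCI image via a Dockerfile
docker build -f bundle.Dockerfile -t quay.io/example/camel-k-bundle:v1.2.0 .
docker push quay.io/example/camel-k-bundle:v1.2.0

# Add the release to a catalog: opm pulls the bundle image, unpacks its
# content, writes it into the catalog database, and wraps the result in a
# new index image, which then has to be pushed as well
opm index add \
    --bundles quay.io/example/camel-k-bundle:v1.2.0 \
    --tag quay.io/example/camel-k-index:latest
docker push quay.io/example/camel-k-index:latest
```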
With the OCI Artifacts landscape having matured considerably since the `Bundle Image` format was devised, both in terms of adoption (eg [Docker Hub announcing OCI artifacts support in Oct 2022](https://www.docker.com/blog/announcing-docker-hub-oci-artifacts-support/), [Azure Container Registry announcing support for OCI artifacts in January 2023](https://learn.microsoft.com/en-us/azure/container-registry/container-registry-oci-artifacts), and other registries, a full list of which can be found in the reference section below) and security (eg [FluxCD announcing support for proving authenticity of OCI artifacts](https://www.cncf.io/blog/2022/10/31/prove-the-authenticity-of-oci-artifacts/)), this doc proposes a new packaging and shipping mechanism for operators as OCI Artifacts, one that would enable significantly slimmer OLM v1 components, with the express intent of reassessing whether the landscape is mature enough today to also assuage the security concerns of enterprise-grade k8s distributions if all of the OLM v1 artifacts are to be OCI artifacts.

## Requirements

### User stories

* As an operator author, I want my operator's on-cluster lifecycle to be managed by OLM v1, with minimal repackaging of my deploy manifests for OLM.
* As an operator author, I want to include my operator in various catalogs shipped with different k8s distributions, with minimal repackaging of my deploy manifests.
* As a catalog author, I want to represent my catalog as a single file (yaml/json) with a list of content-addressable operator packages hosted in remote registries.
* As a cluster admin, I want to make sure adding a catalog of operators to my cluster does not compromise cluster security.

### Goals

* Provide a way for operator authors to have their software's lifecycle managed by OLM v1 on clusters in a way that does not require operator authors to
  - Learn the nitty-gritty of how the various components of OLM v1 interact with each other to provide features like automatic upgrades of installed operators to newly released versions, on-cluster notifications about newly released operator versions, etc.
  - Maintain specialized artifacts, beyond the deploy manifests required to install their operator on a k8s cluster, that duplicate information already stored elsewhere (eg in the deploy manifests).
* Provide a way for catalog authors to represent their catalogs minimally, with a single file (yaml/json) that contains just the remote addresses of the packages they want to include in their catalog.
* Provide a way for cluster admins to ensure cluster security is not compromised by any OLM artifacts.

### Non Goals

This doc does not plan on discussing the implementation details of how the OCI artifacts will be consumed on cluster.
That will be addressed in this doc: [On cluster package delivery mechanism for OLM v1](https://hackmd.io/@of-olm/B1cMe1kHj)

## Proposal

### Building OLM v1 catalogs using OCI artifacts

#### Operator Bundles for OLM v1

Consider the following experiment done with the [`oras`](https://oras.land) cli tool:

```bash
$ cd camel-k/
$ ls
camel-k-operator.v0.3.3  camel-k-operator.v0.3.4  camel-k-operator.v1.0.0  camel-k-operator.v1.2.0  package.yaml

$ ls camel-k-operator.v0.3.3
builds.camel.apache.org.crd.yaml         integrationcontexts.camel.apache.org.crd.yaml   service.yaml
camelcatalogs.camel.apache.org.crd.yaml  integrationplatforms.camel.apache.org.crd.yaml  service_account.yaml
cluster_role.yaml                        integrations.camel.apache.org.crd.yaml
deployment.yaml                          olm.bundle.metadata.yaml

$ ls camel-k-operator.v0.3.4
builds.camel.apache.org.crd.yaml               integrationplatforms.camel.apache.org.crd.yaml  service.yaml
camelcatalogs.camel.apache.org.crd.yaml        integrations.camel.apache.org.crd.yaml          service_account.yaml
deployment.yaml                                olm.bundle.metadata.yaml
integrationcontexts.camel.apache.org.crd.yaml  roles.yaml

$ ls camel-k-operator.v1.0.0
builds.camel.apache.org.crd.yaml         deployment.yaml                            integrations.camel.apache.org.crd.yaml  service.yaml
camelcatalogs.camel.apache.org.crd.yaml  integrationkits.camel.apache.org.crd.yaml  roles.yaml                              service_account.yaml

$ ls camel-k-operator.v1.2.0
builds.camel.apache.org.crd.yaml           integrationplatforms.camel.apache.org.crd.yaml  olm.bundle.metadata.yaml
camelcatalogs.camel.apache.org.crd.yaml    integrations.camel.apache.org.crd.yaml          service.yaml
deployment.yaml                            kameletbindings.camel.apache.org.crd.yaml       service_account.yaml
integrationkits.camel.apache.org.crd.yaml  kamelets.camel.apache.org.crd.yaml
```

In the camel-k directory, there are deploy manifests for different releases of the camel-k operator. They are only in the same directory for the purposes of this experiment; these are simply the deploy manifests that the code repository already maintains (eg the [v1.10.2 release of camel-k](https://github.com/apache/camel-k/tree/v1.10.2/config)). The only additional, optional file is the `olm.bundle.metadata.yaml` file, which contains release-specific information for the OLM v1 dependency resolver (eg runtime constraints) and, for other bundles, could potentially also be the place where dependencies on other packages are specified.
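The contents of `olm.bundle.metadata.yaml` are not specified by this doc. Purely as an illustration of the kind of release-specific information it could carry, a hypothetical file might look like the sketch below; every field name shown here is an assumption, not a defined schema:

```bash
# Hypothetical sketch only: illustrates the sort of release-specific
# constraints and dependencies the OLM v1 resolver could read from the
# optional olm.bundle.metadata.yaml file. All field names are made up.
cat > camel-k-operator.v1.2.0/olm.bundle.metadata.yaml <<'EOF'
package: camel-k
version: 1.2.0
constraints:
  # eg a runtime constraint on the supported Kubernetes version range
  - type: olm.kubeVersionRange      # hypothetical constraint type
    value: ">=1.19.0 <1.26.0"
dependencies:
  # eg a dependency on another package, to be satisfied from a catalog
  - package: cert-manager           # hypothetical dependency declaration
    versionRange: ">=1.9.0"
EOF
```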
These deploy manifests were [uploaded as OCI artifacts to the repository `docker.io/anik120/camel-k`](https://hub.docker.com/r/anik120/camel-k/tags):

```bash
$ oras push docker.io/anik120/camel-k:v0.3.3 --artifact-type olm.bundle ./camel-k-operator.v0.3.3:olm.operator.bundle+tar
Uploading b0fdf2b375a9 camel-k-operator.v0.3.3
Uploaded b0fdf2b375a9 camel-k-operator.v0.3.3
Pushed docker.io/anik120/camel-k:v0.3.3
Digest: sha256:85fc05f87a32c7609184b56dec5a10a7002647e47a7948f906924cb3243c04fd

$ oras push docker.io/anik120/camel-k:v0.3.4 --artifact-type olm.bundle ./camel-k-operator.v0.3.4:olm.operator.bundle+tar
Uploading a6a67f645aa6 camel-k-operator.v0.3.4
Uploaded a6a67f645aa6 camel-k-operator.v0.3.4
Pushed docker.io/anik120/camel-k:v0.3.4
Digest: sha256:5495c4ad896791cf091fa6477e798cd15b7856810ed254287b16118c6b304f4c

$ oras push docker.io/anik120/camel-k:v1.0.0 --artifact-type olm.bundle ./camel-k-operator.v1.0.0:olm.operator.bundle+tar
Uploading 35fa1f989d3a camel-k-operator.v1.0.0
Uploaded 35fa1f989d3a camel-k-operator.v1.0.0
Pushed docker.io/anik120/camel-k:v1.0.0
Digest: sha256:81894556954481c24b42f68f818293b389a64233b617f4750b8e803a1ceeb1f3

$ oras push docker.io/anik120/camel-k:v1.2.0 --artifact-type olm.bundle ./camel-k-operator.v1.2.0:olm.operator.bundle+tar
Uploading 9b660f4d02ae camel-k-operator.v1.2.0
Uploaded 9b660f4d02ae camel-k-operator.v1.2.0
Pushed docker.io/anik120/camel-k:v1.2.0
Digest: sha256:d301ec5c75b426549cb9884763f958ebaaacf441f9b1458b146b8cbf79c702e8
```

To see how this differs from operator bundle manifests packaged as OCI images (i.e. manifests copied into a container image using a Dockerfile), let us pull a version:

```bash
$ mkdir pulled-content
$ cd pulled-content
$ oras pull docker.io/anik120/camel-k:v0.3.3@sha256:85fc05f87a32c7609184b56dec5a10a7002647e47a7948f906924cb3243c04fd
Downloading b0fdf2b375a9 camel-k-operator.v0.3.3
Downloaded b0fdf2b375a9 camel-k-operator.v0.3.3
Pulled docker.io/anik120/camel-k:v0.3.3@sha256:85fc05f87a32c7609184b56dec5a10a7002647e47a7948f906924cb3243c04fd
Digest: sha256:85fc05f87a32c7609184b56dec5a10a7002647e47a7948f906924cb3243c04fd

$ ls
camel-k-operator.v0.3.3
$ ls camel-k-operator.v0.3.3
builds.camel.apache.org.crd.yaml         integrationcontexts.camel.apache.org.crd.yaml   service.yaml
camelcatalogs.camel.apache.org.crd.yaml  integrationplatforms.camel.apache.org.crd.yaml  service_account.yaml
cluster_role.yaml                        integrations.camel.apache.org.crd.yaml
deployment.yaml                          olm.bundle.metadata.yaml
```

This is much simpler than the current workflow for using the content of an OCI bundle image. Today's OCI bundle images have to be pulled and then unpacked (which involves running the container and then copying the content from a specific, hardcoded location in the container filesystem onto the local filesystem) using one of the docker/podman cli tools. Every component of OLM that needs the content of these images (the dependency resolver, `opm`, etc.) had to implement this unpacking, which involved significant hurdles due to the multiple steps required just to get the content out of the image. Therefore, using OCI artifacts as the operator bundle/package shipping mechanism for OLM v1 would not only reduce the overall cost of making these manifests shippable, but also the cost of consumption by the various components (think, for example, no more unpacking `Job`s (yet another construct) needed to make this content available on clusters).
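For contrast, a rough sketch of what getting the same content out of a v0-style `Bundle Image` looks like today; the image name is a placeholder, and `/manifests` and `/metadata` are the conventional in-image locations the content is copied from:

```bash
# Pull the bundle image through a container runtime
docker pull quay.io/example/camel-k-bundle:v0.3.3

# Create a (never started) container so the image filesystem can be read;
# bundle images are typically built FROM scratch, so a dummy command is
# passed only to satisfy `docker create`
docker create --name camel-k-bundle quay.io/example/camel-k-bundle:v0.3.3 sh

# Copy the content out of the hardcoded in-image locations
mkdir -p ./camel-k-operator.v0.3.3
docker cp camel-k-bundle:/manifests ./camel-k-operator.v0.3.3
docker cp camel-k-bundle:/metadata  ./camel-k-operator.v0.3.3

# Clean up the throwaway container
docker rm camel-k-bundle
```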
#### Operator Packages for OLM v1

The CSV construct, besides being repetitive in carrying pieces of information that are already present in the deploy manifests, also encouraged carrying package-level metadata in each bundle included in a package. Consider the example of a present-day Prometheus operator bundle that can be found [here](https://github.com/operator-framework/rukpak/tree/main/testdata/bundles/registry/valid). Information like the [default channel of the package](https://github.com/operator-framework/rukpak/blob/main/testdata/bundles/registry/valid/metadata/annotations.yaml#L2), [example CRs](https://github.com/operator-framework/rukpak/blob/main/testdata/bundles/registry/valid/manifests/prometheusoperator.0.47.0.clusterserviceversion.yaml#L5-L119), and the [icon displayed in the Openshift console](https://github.com/operator-framework/rukpak/blob/main/testdata/bundles/registry/valid/manifests/prometheusoperator.0.47.0.clusterserviceversion.yaml#L305-L307) (and many, many other fields, especially in the CSV) are all package-level metadata that gets duplicated in each and every bundle, over and over again.

With OCI Artifacts enabling operator bundle manifests to be their own artifact images consisting of only the deploy manifests, an operator package for OLM, consisting of all of the package's bundles along with package-level constructs like channel information, can be represented in a single manifest that is itself then uploaded as an OCI artifact. For example, for the different versions of the camel-k bundles uploaded before, a package can be constructed in the following way:

1. Create a package.yaml manifest that consists of all the package-level metadata

```yaml=
apiVersion: plugin.olm.io/v1alpha1
kind: Package
metadata:
  name: camel-k
spec:
  icon:
  - base64data: PHN2ZyB2aWV3Qm94PSIwIDAgMTMwLjIxIDEzMC4wMSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48ZGVmcz48bGluZWFyR3JhZGllbnQgaWQ9ImEiIHgxPSIzMzMuNDgiIHgyPSI0NzciIHkxPSI3MDIuNiIgeTI9IjU2My43MyIgZ3JhZGllbnRUcmFuc2Zvcm09InRyYW5zbGF0ZSg5NC4wMzggMjc2LjA2KSBzY2FsZSguOTkyMDYpIiBncmFkaWVudFVuaXRzPSJ1c2VyU3BhY2VPblVzZSI+PHN0b3Agc3RvcC1jb2xvcj0iI0Y2OTkyMyIgb2Zmc2V0PSIwIi8+PHN0b3Agc3RvcC1jb2xvcj0iI0Y3OUEyMyIgb2Zmc2V0PSIuMTEiLz48c3RvcCBzdG9wLWNvbG9yPSIjRTk3ODI2IiBvZmZzZXQ9Ii45NDUiLz48L2xpbmVhckdyYWRpZW50PjxsaW5lYXJHcmFkaWVudCBpZD0iYiIgeDE9IjMzMy40OCIgeDI9IjQ3NyIgeTE9IjcwMi42IiB5Mj0iNTYzLjczIiBncmFkaWVudFRyYW5zZm9ybT0idHJhbnNsYXRlKDk0LjAzOCAyNzYuMDYpIHNjYWxlKC45OTIwNikiIGdyYWRpZW50VW5pdHM9InVzZXJTcGFjZU9uVXNlIj48c3RvcCBzdG9wLWNvbG9yPSIjRjY5OTIzIiBvZmZzZXQ9IjAiLz48c3RvcCBzdG9wLWNvbG9yPSIjRjc5QTIzIiBvZmZzZXQ9Ii4wOCIvPjxzdG9wIHN0b3AtY29sb3I9IiNFOTc4MjYiIG9mZnNldD0iLjQxOSIvPjwvbGluZWFyR3JhZGllbnQ+PGxpbmVhckdyYWRpZW50IGlkPSJjIiB4MT0iNjMzLjU1IiB4Mj0iNTY2LjQ3IiB5MT0iODE0LjYiIHkyPSI5MDkuMTIiIGdyYWRpZW50VHJhbnNmb3JtPSJ0cmFuc2xhdGUoLTg1LjQyMSA1Ni4yMzYpIiBncmFkaWVudFVuaXRzPSJ1c2VyU3BhY2VPblVzZSI+PHN0b3Agc3RvcC1jb2xvcj0iI2Y2ZTQyMyIgb2Zmc2V0PSIwIi8+PHN0b3Agc3RvcC1jb2xvcj0iI0Y3OUEyMyIgb2Zmc2V0PSIuNDEyIi8+PHN0b3Agc3RvcC1jb2xvcj0iI0U5NzgyNiIgb2Zmc2V0PSIuNzMzIi8+PC9saW5lYXJHcmFkaWVudD48L2RlZnM+PGcgdHJhbnNmb3JtPSJ0cmFuc2xhdGUoLTQzNy44OSAtODM1LjI5KSI+PGNpcmNsZSBjeD0iNTAzLjEiIGN5PSI5MDAuMjkiIHI9IjYyLjUyIiBmaWxsPSJ1cmwoI2EpIiBzdHJva2U9InVybCgjYikiIHN0cm9rZS1saW5lam9pbj0icm91bmQiIHN0cm9rZS13aWR0aD0iNC45NiIvPjxwYXRoIGQ9Ik00ODcuODkgODczLjY0YTg5LjUzIDg5LjUzIDAgMCAwLTIuNjg4LjAzMWMtMS4wNDMuMDMxLTIuNDQ1LjM2Mi00LjA2Mi45MDYgMjcuMzA5IDIwLjczNyAzNy4xMjcgNTguMTQ2IDIwLjI1IDkwLjY1Ni41NzMuMDE1IDEuMTQyLjA2MyAxLjcxOS4wNjMgMzAuODQ0IDAgNTYuNjItMjEuNDkzIDYzLjI4LTUwLjMxMi0xOS41NzItMjIuOTQzLTQ2LjExNy00MS4yOTQtNzguNS00MS4zNDR6IiBmaWxsPSJ1cmwoI2MpIiBvcGFjaXR5PSIuNzUiLz48cGF0aCBkPSJNNDgxLjE0IDg3NC41OGMtOS4wNjggMy4wNTItMjYuMzY4IDEzLjgwMi00MyAyOC4xNTYgMS4yNjMgMzQuMTk1IDI4Ljk2MSA2MS42MDcgNjMuMjUgNjIuNSAxNi44NzctMzIuNTEgNy4wNi02OS45MTktMjAuMjUtOTAuNjU2eiIgZmlsbD0iIzI4MTcwYiIgb3BhY2l0eT0iLjc1Ii8+PHBhdGggZD0iTTUwNC44ODkgODYyLjU0NmMtLjQ3Mi0uMDMyLS45MzIuMDI4LTEuMzc1LjI1LTUuNiAyLjgwMSAwIDE0IDAgMTQtMTYuODA3IDE0LjAwOS0xMy4yMzYgMzcuOTM4LTMyLjg0NCAzNy45MzgtMTAuNjg5IDAtMjEuMzIyLTEyLjI5My0zMi41MzEtMTkuODEyLS4xNDQgMS43NzMtLjI1IDMuNTY0LS4yNSA1LjM3NSAwIDI0LjUxNSAxMy41MSA0NS44NjMgMzMuNDY5IDU3LjA2MyA1LjU4My0uNzAzIDExLjE1OC0yLjExNCAxNS4zNDQtNC45MDYgMjEuOTkyLTE0LjY2MiAyNy40NTItNDIuNTU3IDM2LjQzOC01Ni4wMzEgNS41OTYtOC40MDcgMzEuODI0LTcuNjc3IDMzLjU5NC0xMS4yMiAyLjgwNC01LjYwMS01LjYwMi0xNC04LjQwNi0xNGgtMjIuNDA2Yy0xLjU2NiAwLTQuMDI1LTIuNzgtNS41OTQtMi43OGgtOC40MDZzLTMuNzI1LTUuNjUtNy4wMzEtNS44NzV6IiBmaWxsPSIjZmZmIi8+PC9nPjwvc3ZnPg==
    mediatype: image/svg+xml
  displayName: Camel K Operator
  keywords:
  - apache
  - kamel
  - kubernetes
  - serverless
  - microservices
  links:
  - name: Camel K source code repository
    url: https://github.com/apache/camel-k
  maintainers:
  - email: users@camel.apache.org
    name: The Apache Software Foundation
  maturity: alpha
  provider:
    name: The Apache Software Foundation
  examples: |-
    [{
      "apiVersion": "camel.apache.org/v1alpha1",
      "kind": "IntegrationPlatform",
      "metadata": {
        "name": "example"
      },
      "spec": {
        "build": {
          "buildStrategy": "pod"
        },
        "resources": {
          "contexts": [
            "jvm"
          ]
        }
      }
    },
    {
      "apiVersion": "camel.apache.org/v1alpha1",
      "kind": "Integration",
      "metadata": {
        "name": "example"
      },
      "spec": {
        "source": {
          "content": "// Add example Java code to create Integration",
          "name": "Example.java"
        }
      }
    },
    {
      "apiVersion": "camel.apache.org/v1alpha1",
      "kind": "IntegrationContext",
      "metadata": {
        "name": "example"
      }
    },
    {
      "apiVersion": "camel.apache.org/v1alpha1",
      "kind": "CamelCatalog",
      "metadata": {
        "name": "example"
      }
    },
    {
      "apiVersion": "camel.apache.org/v1alpha1",
      "kind": "Build",
      "metadata": {
        "name": "example"
      }
    }]
  capabilities: Basic Install
  categories: Integration & Delivery
  certified: 'false'
  description: Apache Camel K (a.k.a. Kamel) is a lightweight integration framework
    built from Apache Camel that runs natively on Kubernetes and is specifically
    designed for serverless and microservice architectures.
  repository: https://github.com/apache/camel-k
  support: Camel
  defaultChannel: stable
  channels:
  - name: alpha
    entries:
    - name: camel-k-operator.v0.3.3
      location: docker.io/anik120/camel-k:v0.3.3@sha256:85fc05f87a32c7609184b56dec5a10a7002647e47a7948f906924cb3243c04fd
    - name: camel-k-operator.v0.3.4
      location: docker.io/anik120/camel-k:v0.3.4@sha256:5495c4ad896791cf091fa6477e798cd15b7856810ed254287b16118c6b304f4c
      replaces: camel-k-operator.v0.3.3
  - name: stable
    entries:
    - name: camel-k-operator.v1.0.0
      location: docker.io/anik120/camel-k:v1.0.0@sha256:81894556954481c24b42f68f818293b389a64233b617f4750b8e803a1ceeb1f3
    - name: camel-k-operator.v1.2.0
      location: docker.io/anik120/camel-k:v1.2.0@sha256:d301ec5c75b426549cb9884763f958ebaaacf441f9b1458b146b8cbf79c702e8
      replaces: camel-k-operator.v1.0.0
```

Note how this package.yaml is the same CR that's proposed [here](https://hackmd.io/-Rrvhgb3Q7m9HYZW2gxQtQ?view#Package-CRD).

2. Upload the package.yaml as an OCI artifact to the camel-k image repository

```bash
$ oras push docker.io/anik120/camel-k:package --artifact-type olm.package package.yaml
Uploading 32b06f58fcce package.yaml
Uploaded 32b06f58fcce package.yaml
Pushed docker.io/anik120/camel-k:package
Digest: sha256:673d649b120c1418c787e9a78751db678356dd7e3e85a0650b322cfd209e0a4e
```

This construct also allows:

1. The bundles to be reusable. For example, camel-k-operator.v0.3.3 can be included in the package metadata for both the camel-k community package and the camel-k enterprise package.
2. The net change required to modify a package (eg include a bundle in a different channel) to be contained to a very small surface area. This is in contrast to the current process, where a change like switching a bundle's channel involves rebuilding the bundle and then re-adding it to the catalog with constructs to help already installed bundles pick up the change, i.e. rebuilding the entire catalog for a small change to a single bundle. It also means that the net change pulled by consumers every time the image registry is polled for new information is a fraction of what gets pulled today, which is essentially the entire catalog.

#### Operator catalogs as OCI artifacts

Finally, using the same tools, operator catalog authors can represent their catalogs using just a single file, which is simply a collection of addresses of the various packages the author wants to include in the catalog. Eg:

```yaml=
schema: olm.Catalog
packages:
  - docker.io/anik120/camel-k:package@sha256:673d649b120c1418c787e9a78751db678356dd7e3e85a0650b322cfd209e0a4e
  - docker.io/anik120/etcd:package@sha256:4makd8r9b120c1418c787e9a78751db678356dd7e3e85a0650b322cfd20hna9805a
  .
  .
  .
```

```bash
$ oras push docker.io/anik120/oci-catalog:latest --artifact-type olm.catalog catalog.yaml
Uploading f9b59f42d693 catalog.yaml
Uploaded f9b59f42d693 catalog.yaml
Pushed docker.io/anik120/oci-catalog:latest
Digest: sha256:ae112cd1e5361f6075cce661e7bce2887630e8a0484a6073be4c01221afdbb3f
```

## Addressing security concerns of images

Operator bundle manifests, package metadata yaml, and catalog metadata yaml can be pushed to image registries as OCI artifacts using any of the cli tools (see the reference section) that specialize in pushing arbitrary manifests as OCI artifacts to image registries. However, since these tools do not require a container runtime to pull these manifests to the local filesystem, they essentially bypass container runtimes configured to adhere to security policies (eg "only allow images from the domain quay.io to be pulled onto the cluster"), thereby potentially exposing the cluster to malicious content if any of the OLM v1 components enable pulling of these images on clusters.

The cli tool `opm` could help abstract away the operational knowledge required for building these OCI artifact images, along with building in encodings that ensure any of the images used in the OLM v1 eco-system comply with all security standards (eg by using libraries like [oras-go](https://github.com/oras-project/oras-go)). For example:

* All artifact images (operator bundle, package, or catalog) pushed with `opm` would have the appropriate `mediaType` associated with them, [eventually registered with the IANA](https://github.com/opencontainers/artifacts/blob/main/artifact-authors.md#registering-unique-types-with-iana), such that any subsequent pulls on clusters by other components can reject images that do not have the expected `mediaType`s.
* Artifact images built with `opm` would be signed with the help of a tool like [cosign](https://docs.sigstore.dev/cosign/overview/), and the public key needed to verify these images before they're allowed to be pulled onto clusters would be provided to the on-cluster components. For example, a new [rukpak provisioner implementation](https://github.com/operator-framework/rukpak/pull/609) that enables `BundleDeployment`s from OCI Artifact images would have a way for users to pass in the public keys via secrets, which the provisioner would then use to verify the image before deploying the `Bundle`.

Note that although it is possible to build these images with any of the tools available for building OCI artifacts (i.e. any bugs in `opm`'s building, pushing, and signing of OCI artifacts will not be critical blocking bugs for operator/catalog authors), `opm` would be the central tool that assists in performing all of these functions related to operator/catalog artifacts under one roof. Catalog authors can also share the responsibility of ensuring cluster security by [copying the artifact images](https://carvel.dev/imgpkg/docs/v0.34.0/commands/#copy) of all the operator packages/bundles they're including in their catalog to a trusted registry and signing them themselves (i.e. customers would then be handed the public key for the images signed by the catalog authors, so they can create a `Secret` containing the public key on clusters). Any on-cluster component that uses these artifacts would, however, be in the critical path of ensuring security on clusters (i.e. any bug in the rukpak artifact provisioner related to the verification of images would have to be considered a critical bug).
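As a rough illustration of the signing and verification flow described above (the key file names and the Secret name are placeholders, and exactly how a rukpak provisioner would consume the public key is an assumption, not a defined interface):

```bash
# Generate a signing key pair (produces cosign.key and cosign.pub)
cosign generate-key-pair

# Sign the pushed artifact images (bundle, package, catalog) by digest
cosign sign --key cosign.key docker.io/anik120/camel-k:v1.2.0@sha256:d301ec5c75b426549cb9884763f958ebaaacf441f9b1458b146b8cbf79c702e8
cosign sign --key cosign.key docker.io/anik120/camel-k:package@sha256:673d649b120c1418c787e9a78751db678356dd7e3e85a0650b322cfd209e0a4e
cosign sign --key cosign.key docker.io/anik120/oci-catalog@sha256:ae112cd1e5361f6075cce661e7bce2887630e8a0484a6073be4c01221afdbb3f

# Anyone holding the public key can verify an artifact before consuming it
cosign verify --key cosign.pub docker.io/anik120/oci-catalog@sha256:ae112cd1e5361f6075cce661e7bce2887630e8a0484a6073be4c01221afdbb3f

# Hand the public key to the on-cluster components, eg as a Secret that the
# OCI artifact provisioner could be pointed at (Secret name is hypothetical)
kubectl create secret generic olm-artifact-signing-key --from-file=cosign.pub
```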
In conclusion, the argument this doc makes is that the cost of enforcing security standards via the on-cluster components that will consume the OCI artifacts promises to be much lower than the cost currently being paid by all parties involved in continuing with the current `Bundle Image` and `Catalog Image` formats.

## References

* [OCI Artifacts project introduction and scope](https://github.com/opencontainers/artifacts#project-introduction-and-scope)
* [A good Medium article on OCI images vs OCI artifacts](https://dlorenc.medium.com/oci-artifacts-explained-8f4a77945c13)
* List of some of the tools that could be used to implement operator/catalog manifest packaging and shipping:
  - [oras](https://oras.land): The opening statement in its documentation is "Registries are evolving as generic artifact stores. To enable this goal, the ORAS project provides a way to push and pull OCI Artifacts to and from OCI Registries."
  - [imgpkg](https://carvel.dev/imgpkg/docs/v0.34.0/): this tool also has a concept of a [bundle](https://carvel.dev/imgpkg/docs/v0.34.0/resources/#bundle) (!!), a focus on artifact immutability (think immutable operator bundles), and support for references to dependent images (think related images (!!)), all in order to specialize in copying artifacts across registries (think the disconnected-environment use case). The caveat with this tool, though, is that it again requires special packaging steps to adhere to the concepts it utilizes (think CSV).
  - [crane](https://github.com/google/go-containerregistry/tree/main/cmd/crane): Its USP includes allowing users to inspect the list of files in an image, inspect the diff in artifacts between two images, and inspect image size, [besides other things](https://github.com/google/go-containerregistry/blob/main/cmd/crane/recipes.md).
* List of registries supporting OCI Artifacts (source: https://oras.land/implementors/)
  - [CNCF Distribution](https://oras.land/implementors/#cncf-distribution) - local/offline verification
  - [Azure Container Registry](https://oras.land/implementors/#azure-container-registry-acr)
  - [Amazon Elastic Container Registry](https://oras.land/implementors/#amazon-elastic-container-registry-ecr)
  - [Google Artifact Registry](https://oras.land/implementors/#google-artifact-registry-gar)
  - [GitHub Packages container registry](https://oras.land/implementors/#github-packages-container-registry-ghcr)
  - [Bundle Bar](https://oras.land/implementors/#bundle-bar)
  - [Docker Hub](https://oras.land/implementors/#docker-hub)