# Supporting local bundle content
## Summary
https://github.com/operator-framework/rukpak/issues/92
Currently the Bundle API only supports referencing a container image (`spec.source.image`) or a git repository (`spec.source.git`). It would be nice to support configuring a Bundle to source local cluster contents, e.g. a mounted ConfigMap or PersistentVolume filesystem path, so we can avoid having to build/push a scratch-based container image or create a git repository just to create a Bundle on the fly.
This functionality of creating a Bundle dynamically within the cluster, without creating external sources, is important for the success of deppy. Deppy performs resolution based on a set of Inputs and returns a resolution result. This will likely take the form of a ResolveSet Bundle: a collection of BundleDeployments that are managed as one unit. See [this doc](https://hackmd.io/nprZyBwPQqmix94abAiMsA#ResolveSet-bundle-integration) for more information. This ResolveSet Bundle needs to be created on demand, so having it backed by a cluster resource like a ConfigMap is desirable.
## Use-cases
1. Deppy creates a ResolveSet bundle dynamically as a result of a successful resolution. The ResolveSet consists of one or more BundleDeployments that rukpak should manage. There is no pre-existing container image or git repository that contains the BundleDeployments, so there needs to be a way to create, on demand, a source for a Bundle embedded in a BD. Creating a ConfigMap, storing the BDs in that ConfigMap, and then passing it to the rukpak layer enables deppy to create resources on the fly.
2. Supporting dynamic Bundle creation via a backing source would also be valuable for the dev experience. Instead of manually creating an image or a git repository to create a Bundle, simply uploading manifests to a ConfigMap in the cluster and then referencing it allows for quicker iteration and a nicer dev UX.
## Proposal
The proposed solution is to introduce a new source type, `local`, to the existing union type field on the Bundle API. The first local source would be backed by a ConfigMap; support can later be expanded to a path on a PersistentVolume. The proposed API could be something like this:
```yaml
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: resolveset-aef235c
spec:
  source:
    type: local # oneOf union type
    local:
      configMap:
        name: cm-resolveset-aef235c
        namespace: rukpak-system
  provisionerClassName: core.rukpak.io/plain
```
The user specifies an existing ConfigMap on the cluster as part of their Bundle. The ConfigMap's contents are then unpacked into the existing `/var/run/bundles` path in the plain provisioner's local storage, under a name derived from `bundle.name`. This ensures that the unpacked bundle is treated the same way as other unpacked bundle types. The manifests would be stored in a tar.gz file.
Initially, the API is intentionally limited. Once support for a PersistentVolume source is added, the `local` API can be expanded to include a `path` field that specifies where on the PV the manifests are located.
The ConfigMap will carry an owner reference to the Bundle that references it, so that when the Bundle is removed, the backing ConfigMap is garbage-collected along with it.
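For illustration, the owner-referenced ConfigMap might look like the following (the `uid` is a placeholder; the controller would fill in the real Bundle UID at runtime):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-resolveset-aef235c
  namespace: rukpak-system
  ownerReferences:
    - apiVersion: core.rukpak.io/v1alpha1
      kind: Bundle
      name: resolveset-aef235c
      uid: <bundle-uid> # set by the controller when the reference is established
data:
  deployment.yaml: |
    apiVersion: apps/v1
    kind: Deployment
    # ...
```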
## Workflow
1. User has a `manifests` directory locally that contains a set of valid Kubernetes objects as YAML files.
2. User runs `kubectl create configmap my-configmap --from-file=<manifests directory>` which creates a ConfigMap that contains the objects in the directory. The key for each entry is the filename, and the value is the contents of the file.
3. User creates a BundleDeployment referencing the ConfigMap (by namespace/name) using the `local` source type in the template, as in the above example.
4. The provisioner referenced by the Bundle sees the new BundleDeployment and goes to unpack its contents. The provisioner, as part of its reconciliation loop, reads the contents of the ConfigMap and stores them as `/var/run/bundles/<bundle-name>.tar.gz` in its local storage.
5. From there, the BundleDeployment behaves the same as any other.
6. When the BD is removed, the underlying ConfigMap gets deleted as well.
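Putting the workflow together, the BundleDeployment from step 3 might look like the following sketch; the names `my-app` and `my-configmap` are placeholders, and the template shape follows the existing BundleDeployment API:

```yaml
apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  name: my-app
spec:
  provisionerClassName: core.rukpak.io/plain
  template:
    metadata:
      labels:
        app: my-app
    spec:
      provisionerClassName: core.rukpak.io/plain
      source:
        type: local
        local:
          configMap:
            name: my-configmap
            namespace: rukpak-system
```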
## Limitations
* ConfigMaps have a limited size (1MiB) and therefore cannot accommodate a large number of manifests. This is a known limitation for all ConfigMaps in Kubernetes and something the user should be aware of. Adding support for a PersistentVolume source will let users work around this size limitation.
## Implementation Checklist
- [ ] Implement new source type with a ConfigMap as the supported backend
- [ ] Write tests verifying the new source type
- [ ] Add docs outlining the new source type