# Questions about decentralized container registries
* Software distribution is a natural and widely applicable use case for web3 technologies. Decentralized storage and blockchain-based authentication provide new resilience and shared-ownership features for software projects.
* A further abstraction of the concepts defined in the OCI spec could enable tooling that is agnostic between protocols and between centralized and decentralized solutions.
## Components of an OCI artefact registry
Container images are distributed with the following structure:
* Manifests containing lists of blobs, which may include filesystem layers and other artefacts. These lists play the role of "dependencies" in other software distribution paradigms.
* Name + tag data, with the expectation that each name+tag pair resolves unambiguously to a single object within a given namespace.
* Author data, which should be some type of authentication receipt.
* "Advisory" data such as documentation and URLs.
Container registries are endowed with the following operations:
* Pull = download. May require authentication.
* Push = write (tags, blobs, manifests). Nearly always requires authorization, which can take several forms. Tags can be overwritten.
* I would distinguish *uploading* (any type of data, raw blobs) from *publishing* an artefact (creating or updating a tag).
* Content discovery = ls. Obtain list of tags from a repo.
* Content management = rm. Delete or unlink content. May require different permissions than write.
* Referrers API.
* Something about "lifecycle" management.
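The operations above correspond to endpoints in the OCI distribution spec. A rough mapping, sketched in Python (paths follow the spec; the operation names on the left are this note's vocabulary, not the spec's):

```python
# Registry operations mapped to OCI distribution-spec endpoints.
# {name} is the repo name; {ref} is a tag or digest.
ENDPOINTS = {
    "pull_manifest": ("GET", "/v2/{name}/manifests/{ref}"),
    "pull_blob": ("GET", "/v2/{name}/blobs/{digest}"),
    "push_blob": ("POST", "/v2/{name}/blobs/uploads/"),  # then PUT to the returned location
    "push_manifest": ("PUT", "/v2/{name}/manifests/{ref}"),  # "publishing": creates/updates a tag
    "list_tags": ("GET", "/v2/{name}/tags/list"),  # content discovery
    "delete_manifest": ("DELETE", "/v2/{name}/manifests/{ref}"),  # content management
    "list_referrers": ("GET", "/v2/{name}/referrers/{digest}"),  # referrers API
}

def endpoint(op: str, **params: str) -> tuple:
    """Resolve an operation name to a concrete (method, path) pair."""
    method, template = ENDPOINTS[op]
    return method, template.format(**params)
```

For example, `endpoint("list_tags", name="library/alpine")` yields `("GET", "/v2/library/alpine/tags/list")`. The uploading/publishing distinction shows up here as the split between blob upload and manifest/tag PUT.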
Some data types:
* Blobs
* Manifests
* Addresses (name resolution targets)
* Tag lists
* Repo lists (not in OCI spec)
* Blob lists (not in OCI spec)
## Addressing and names
Container images are addressed by content digest or by name+tag (in the context of a particular registry/namespace). How are these resolved in decentralized registries?
### Name resolution
An OCI registry is addressed by a two-layer hierarchy of names:
1. Repo name. Each repo has its own owner + authorization set.
2. Tag. Tags are further divided into content digests and user-specified tags. Among user-specified tags, the owner may wish to commit to some being immutable (e.g. fully specified version numbers). Further granularity of authorizations by tag type is optional.
A name resolution service must therefore be able to:
* Respond to read queries by delegating to a suitable tag database, indexed by repo name.
* If the OCI Content Discovery method is implemented, respond to read queries with a complete list of tags.
* Authorize write requests -- further subdivided into tag creation and tag updating -- with custom authorization parameters for each name.
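The three responsibilities above can be sketched as a minimal in-memory resolver. The class and method names are illustrative assumptions, not from the OCI spec; the point is the shape of the interface, including the creation/update split and owner-committed immutable tags:

```python
class InMemoryResolver:
    """Sketch of a name-resolution service: tag database indexed by repo name,
    content discovery, and per-repo write authorization. Illustrative only."""

    def __init__(self, owner: str):
        self.owner = owner
        self.tags: dict = {}        # repo name -> {tag -> content digest}
        self.immutable: set = set() # (repo, tag) pairs the owner committed as immutable

    def resolve(self, repo: str, tag: str) -> str:
        """Read query: delegate to the tag database for this repo."""
        return self.tags[repo][tag]

    def list_tags(self, repo: str) -> list:
        """OCI Content Discovery: complete tag list for a repo."""
        return sorted(self.tags.get(repo, {}))

    def authorize_write(self, repo: str, tag: str, identity: str, create: bool) -> bool:
        """Authorize a write, distinguishing tag creation from tag update."""
        if identity != self.owner:
            return False
        # updates to tags the owner committed as immutable are refused
        return create or (repo, tag) not in self.immutable
```

A decentralized implementation would replace the owner check with, e.g., an ENS ownership lookup, and the tag database with on-chain records.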
Some possible implementations:
* Server path fragment. Resolved by a filesystem or other database handler. Guarantees depend on the server implementation; will generally be unequivocal if the target is an immutable filesystem.
* DNS. Name resolves to a "record set" which with additional assumptions can be used to construct a fetcher. Low safety and authenticity guarantees, which can be enhanced with things like DNSSEC.
Bear in mind the analogy "record set" = "tag list."
* ENS. Name resolves to a "record set." High safety, liveness, and authenticity guarantees. Each domain has its own owner and authorization set, so an ENS domain provides the features necessary for resolution and publishing of a single repo.
It would be natural to use the `contenthash` interface (EIP-1577), except this only supports one contenthash per name.
**Question.** Is there some benefit to be gained by deploying an ENS resolver for each artefact registry? This resolver could be optimized to handle the specific record types (tags) used in these registries.
### Content-addressed blobstore
The OCI image spec includes the structure of a content-addressed blob store with variable digest function. Blobs may be structured manifests including references to other blobs or archived filesystem layer data. They have variable size and in practice only SHA256 and SHA512 hashes are officially supported.
Hence, container images and their constituent blobs can be addressed by hash. This address space is distinct from that of the underlying storage medium.
* Immutable (append-only) and verifiable if the resolution target is also a content hash.
* Could this be a good use case for EAS? (cf. claims about artefacts)
* What about ENS names with multiple content hashes and supporting reverse lookups?
* If resolution target is not a content digest (e.g. if DNS A records are used), a dynamic database is needed.
* Any of the above tools can be used, but may be unnecessarily expensive.
* Resolution target can be a content-addressed data store like IPFS/Swarm/Nix. We must be careful about the hash algo and chunk size limit. Correctness should be quickly verifiable on a streaming basis.
* Or maybe there is already some blob/object store implemented directly on top of Swarm?
* Some shared artefacts not tagged in any specific repo may be retrieved this way.
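The streaming-verification requirement mentioned above is cheap to meet for the OCI digest schemes. A sketch, assuming blobs arrive as a readable byte stream:

```python
import hashlib

def verify_blob(stream, expected: str, chunk_size: int = 1 << 20) -> bool:
    """Verify a blob against an OCI-style digest ("sha256:<hex>" or
    "sha512:<hex>") on a streaming basis, without buffering the whole blob."""
    algo, _, want = expected.partition(":")
    if algo not in ("sha256", "sha512"):  # the two registered OCI algorithms
        raise ValueError(f"unsupported digest algorithm: {algo}")
    h = hashlib.new(algo)
    while chunk := stream.read(chunk_size):
        h.update(chunk)
    return h.hexdigest() == want
```

Note this only works when the resolution target is itself a content hash in a supported scheme; a Swarm bzz address would need its own (chunked BMT) verification path.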
**Question.** What are the requirements for each type of data read, write, update, delete operation used when interacting with a container registry? What are cost effective ways to achieve them?
#### Content addressing schemes
Swarm has its own content addressing scheme which is not one of the OCI digest schemes supported by default (namely, SHA256 and SHA512). Introducing it as a new scheme to OCI would require extending OCI tools, e.g. for building container images.
OTOH, implementing a content-addressed blobstore on Swarm with a different scheme requires some engineering R&D.
### Registry and repo manifests
The "Content Discovery" function of OCI registries requires a method to assemble a list of tags for a repo. If tags are immutable, the data structure for this is a CRDT with merging. If deletion of tags is allowed, the situation becomes more complicated, but there are various CRDT approaches to allow removals too.
For other purposes it may be useful to retrieve a list of repo names or content-addressed data.
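Both CRDT cases can be sketched briefly. The class names are illustrative, not from any registry implementation; the key property is that merge is commutative, associative, and idempotent, so replicas converge in any merge order:

```python
class GSetTagList:
    """Grow-only set: sufficient when tags are immutable (never deleted)."""

    def __init__(self, tags=()):
        self.tags = set(tags)

    def add(self, tag: str):
        self.tags.add(tag)

    def merge(self, other: "GSetTagList") -> "GSetTagList":
        return GSetTagList(self.tags | other.tags)  # union: order-independent

class TwoPhaseTagList:
    """Two-phase set: supports deletion via tombstones. The price of
    conflict-free merging is that a removed tag can never be re-added;
    OR-Sets lift that restriction by tagging additions with unique ids."""

    def __init__(self, tags=(), tombstones=()):
        self.tags = set(tags)
        self.tombstones = set(tombstones)

    def add(self, tag: str):
        self.tags.add(tag)

    def remove(self, tag: str):
        self.tombstones.add(tag)

    def live(self) -> set:
        return self.tags - self.tombstones

    def merge(self, other: "TwoPhaseTagList") -> "TwoPhaseTagList":
        return TwoPhaseTagList(self.tags | other.tags,
                               self.tombstones | other.tombstones)
```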
### Mirrored repositories
Apparently existing container ecosystem tools do not support registries that contain both authoritative and mirrored repos. Does web3 solve this?
## Authentication and authorization
In the context of artefact registries, **authentication** consists of validating that a particular artefact was released under a name by the holder of some "identity." Authentication is not part of the OCI spec, but is provided by other tools such as PGP-based methods or Notary (based on TUF, "The Update Framework"). Authentication methods can have online or offline token generation and online or offline verification.
An OCI artefact registry is responsible for implementing its own authorization protocols for artefact publication.
### Considerations for authorization protocols
We get the best liveness properties when the "authorization server" (smart contract wallet) is in the same place as the restricted resource (e.g. nameservice).
Examples:
* Authorization to update field in onchain K-V store (e.g. ENS, Genome). Owner wallet should live on same chain.
* Authorization to use "quota" (funds) to obtain "auth tokens" (Swarm postage stamps). Authorization server is Gnosis.
* Authorization to upload data to blobstore. Auth server is Gnosis (not the same as the verifier, but core to the design of Swarm).
* Authorization to upload data to 3rd party cloud blobstore. Authorization is an access token which is verified within the cloud.
It's natural to assume that an on-chain registry would use the chain (e.g. Gnosis) as an authorization server.
### Ethereum as authorization server
* Ethereum-based authentication requires the verifier to have a live connection to an Ethereum node. This is the main blocker highlighted by the SCITT working group to getting attestation services running with a chain. Advances in light nodes and validity proofs may fix this problem.
**Question.** What are some use cases where the online verifier requirement is problematic?
* Ethereum methods offer a number of unique options for shared ownership based on the idea of smart accounts.
* Safe or DAO as shared management of software repository.
* Linking of ownership of the repository (i.e. authorization to publish tags under a name) with ownership of the underlying storage (authorization to upload or delete the underlying blobs).
**Question.** What kinds of projects would benefit from this type of ownership?
* OTOH token-based methods do not generally need online verification, even if the supplicant needs a live connection to the authority to obtain the token. Some tools allow shared ownership here:
* Cryptographic/self-sovereign: Shamir secret sharing, threshold signatures
* Authorization server running a custom policy. Trust may be improved using various methods, e.g. RA and TEEs.
* Ethereum can be programmed to verify authorization tokens designed for offline verification, e.g. PGP messages. This is expensive, but may be worth it for the sake of unifying authorization methods in a single server.
**Question.** To what extent can Ethereum tools replicate or surpass the guarantees of the TUF project (Notary) for "obtaining trusted files"? What about key rotation?
### PGP
PGP supports a few very important methods not available natively in Ethereum: delegation, rotation, and revocation. These methods are widely used in software publishing today, so it's important to understand how to provide this option for an Ethereum-based artefact registry.
Delegation and rotation can be provided by account abstraction. Ethereum or a sidechain thereof may be a good choice for a revocation server.
## Payments
* What are possible semantics for "push" operations in the presence of payments?
* Payments for "push" operations can be understood as a part of *authorization*. For example, the authorization might be a payment receipt, signed Swarm postage stamp identifier (offline), or a successful call to a validation function in a smart contract.
* Is it useful to unify the concepts of payments and quotas in cloud resource management?
* What are some realistic payment schemes for upload authorizations? Examples: per upload, per repo subscription, delegated organisation (DAO?) subscription. Do resellers fit into this market?
* Payments may also be relevant for pull or delete (a.k.a. "content management").
* Payments for publication of blobs, manifests, and tags can be bundled.
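The "payment receipt as authorization" idea can be sketched as follows. Everything here is an assumption for illustration: the receipt fields, the HMAC shared-secret signature (standing in for a chain-verifiable signature or contract call, in the spirit of a Swarm postage stamp), and the quota semantics.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: a "push" is authorized by presenting a signed payment
# receipt that grants an upload quota. A real system would use chain-verifiable
# signatures or an on-chain validation call instead of a shared-secret HMAC.

def issue_receipt(secret: bytes, repo: str, quota_bytes: int) -> dict:
    """Issued by the payment/authorization server after payment clears."""
    body = {"repo": repo, "quota_bytes": quota_bytes}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return body

def authorize_push(secret: bytes, receipt: dict, upload_size: int) -> bool:
    """Checked by the registry: valid signature and sufficient quota."""
    body = {k: v for k, v in receipt.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    ok = hmac.compare_digest(
        receipt.get("sig", ""),
        hmac.new(secret, payload, hashlib.sha256).hexdigest())
    return ok and upload_size <= receipt["quota_bytes"]
```

This separates token generation (online, at the authorization server) from verification (offline with respect to the payment system), matching the online/offline split discussed in the authentication section.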
## Resources
* OCI distribution specification (registry API spec). https://github.com/opencontainers/distribution-spec/blob/main/spec.md
* OCI container image specification
* Notary. https://github.com/notaryproject/notary
* TUF. https://theupdateframework.github.io/specification/latest/index.html
* ENS registry spec (EIP-137). https://eips.ethereum.org/EIPS/eip-137#registry-specification
* Contenthash interface for ENS Resolver. (EIP-1577). https://eips.ethereum.org/EIPS/eip-1577
* CCIP read (ERC-3668). https://eips.ethereum.org/EIPS/eip-3668#gateway-interface
# Tasks
We split tasks into research, performance testing + evaluation, and engineering.
## Research
1. Devise evaluation frameworks for hierarchical K-V stores (HKVS) and content-addressed blobstores (CABS). Criteria should include:
* Bandwidth + latency
* Consistency guarantees
* Authentication guarantees
* Costs
## Testing + evaluation
### HKVS
1. ENS
2. Genome/space.id
3. DNS
4. Local filesystem
5. Production NoSQL DB e.g. Redis
6. Webserver (wrapping local filesystem or other DB)
### CABS
1. Swarm with bzz ids
2. Whatever is built in to Docker registry
## Engineering
### Blobstore
1. Investigate how container tooling would need to be extended to add new hashing schemes.
2. What metadata needs to be attached to blobs? Should/can this be stored separately to data?
3. CAT table encapsulated as SOC (owned by whom?)
4. What interface does a blobstore expose? How does docker registry handle it internally? Answer: https://github.com/distribution/distribution/blob/main/registry/storage/driver/storagedriver.go
5. Are there registry implementations with a pluggable blobstore?
6. In the Docker registry status quo, the uploader of a blob is responsible for ensuring that the content address is correct
### HKVS
1. Understand how space.id / genome works.
2. Understand authenticity guarantees of DNS with DNSSEC