# OCIv2 Proposal Brainstorm #

**NOTE:** Editing for this document closes on 2020-06-21 ([Anywhere on Earth][AoE]). Any proposals for new requirements after that date will require restarting the process for the OCIv2 specification, which folks are unlikely to want to do.

[AoE]: https://en.wikipedia.org/wiki/Anywhere_on_Earth

**Contributors:**
<!-- Add your name to this list if you've participated in this document. -->
- Aleksa Sarai <asarai@suse.de>
- Tycho Anderson <tycho@tycho.ws>
- Nisha Kumar <nishak@vmware.com>
- Bo Liu <obuil.liubo@gmail.com>
- Akihiro Suda <suda.kyoto@gmail.com>
- Wei Fu <fuweid89@gmail.com>
- Kohei Tokunaga <ktokunaga.mail@gmail.com>
- Peng Tao <bergwolf@gmail.com>
- Huiba Li <lihuiba@gmail.com>
- Sargun Dhillon <sargun@sargun.me>
- Till Wegmüller <toasterson@gmail.com>
- Jiang Liu <liuj97@gmail.com>
- Jin Zhang <zj3142063@gmail.com>
- Jimmy Zelinskie <jimmyzelinskie+git@gmail.com>

As discussed on several OCI calls, and spoken about at many conferences, the current OCI image-spec has some fairly large flaws. These were a natural consequence of the decision to make `v1.0.0` of the image-spec backwards-compatible with the Docker image specification (at least when it comes to the layer format) to allow large public registries to perform metadata-only migrations to OCI. Whether or not this actually had an impact on OCI's adoption is unclear, but that was the route taken with the original OCI image-spec. For some background reading on the specific issues with one aspect of the OCI image-spec (specifically the use of `tar` archive layers), one of the contributors to this document wrote a [blog post motivating this work][cyphar-ociv2].

However, image-spec `v1.0.0` has long since been released, and it is now time to discuss what issues exist and how they may be resolved. In order to avoid a flurry of prototypes which do not capture all of the key use-cases folks have, the development of an "OCIv2 image-spec proposal" will be done in several stages:

1.
   Brainstorm the requirements folks have, the key issues with the current format which cause it to not fulfil those requirements, and the motivating use cases for each requirement. Each requirement MUST have at least one contact associated with it, so that they can be contacted for follow-up questions. *(We are here.)*
2. Pare down the list of requirements to a set of "key requirements" with all stakeholders present, with discussions about whether any listed requirement can be merged or dropped from the list.
3. Discuss rough proposals to attempt to come up with a solution for the key requirements agreed upon in (2). If it is deemed to not be possible, then we go back to (1) and (2) and discuss which requirements can be dropped to make a solution viable.
4. Build prototypes to further solidify the proposals discussed in (3).
5. Work on a formal specification (as a draft), and possibly merge it *as a draft* into the image-spec.
6. Get the new format into production use (in an experimental capacity) to allow us to see how well the format deals with real users, and adjust the specification to match real-world experiences.
7. Publish an OCI image-spec release with the new specification.
8. Have a launch party.

Please note that while the code-name of this project is "OCIv2", we will likely not have to release a `v2.0.0` of the OCI image-spec. Ideally our proposal should simply be a backward-compatible extension to the existing specification, and not require us to bump the major version.

[cyphar-ociv2]: https://www.cyphar.com/blog/post/20190121-ociv2-images-i-tar

## Requirements ##

If you wish to add a requirement, please add it as a new section. If you feel that your requirement fits within an existing section, feel free to add your comments, but make it clear that you are extending an existing section (by adding your name to the contact list, and adding a new subheading).
<!--
### TEMPLATE ###

- Contact: Jane Citizen <jane.citizen@example.com>
- Contact: Joe Citizen <joe.citizen@example.com>

*Overview of requirement.*

#### Existing Specification Limitations ####

*Why is the current spec not up to the task of solving this issue?*

#### Motivating Use Case ####

*What real use cases does this requirement solve?*

#### Prior Art ####

*Are there any existing projects we can look at for inspiration? Can we re-use them?*
-->

### Reduced Duplication ###

- Contact: Aleksa Sarai <asarai@suse.de>
- Contact: Akihiro Suda <suda.kyoto@gmail.com>
- Contact: Peng Tao <bergwolf@gmail.com>

When downloading an image, users should not have to re-download large quantities of data they already have locally (often base image updates only change a handful of packages -- but because the entire rootfs is stored in a single tar blob, users have to re-download the whole rootfs each time). In addition, minor container image updates result in senseless duplicated storage costs.

This is a fairly broad requirement and encompasses several distinct proposals which will need to be evaluated and discussed separately:

* Deduplication during image transfer through various methods, such as exposing the root filesystem (or just the data part of it, as metadata is less deduplicable) as separate OCI content-addressable blobs.
* The use of binary deltas (aka binary diffs) for image transfer.
* The use of (multipart) HTTP Range requests to opportunistically download the minimum amount of data required.
* Having a format which is friendly towards reflink- and hardlink-based runtime deduplication for the mounted container root filesystem.
* Having a format which is deduplication-friendly such that a chunk hash is encoded into the format itself, e.g., the ZFS Merkle tree.

#### Existing Specification Limitations ####

The current design of OCI images is such that container filesystem data is laid out in a set of completely opaque (and very large, in the case of base images) tar blobs.
The usage of multiple layers can reduce duplication, but in practice this is dwarfed by the costs of base image updates -- and users really shouldn't have to care about the implementation of the OCI image format when building their images. In my view, being forced to expose layers to users directly (as tools such as Docker, umoci, and buildah do) is evidence of a flawed design. This requirement will likely result in the removal of the concept of layers from the underlying storage format (build tools can still use layer-like caching, but it will be based on snapshots rather than being baked into the image format). [I (Aleksa) wrote a blog post further explaining the issue.][cyphar-ociv2]

#### Motivating Use Case ####

There are several use cases motivating this requirement:

* Reducing the necessary internet traffic for image retrieval reduces the time it takes to spawn a container (image pulling is often the most expensive part of starting a container).
* Decreasing duplication for container images when running containers can increase the density of container systems, as well as providing a more efficient runtime because of more efficient page cache usage (if `libc.so` is the same inode for containers with different images, they all use the same page cache entry under Linux).
* ...

#### Prior Art ####

There is a fair amount of prior art on most aspects of this requirement:

* [casync][casync] intended to solve the directory redistribution problem using content-defined chunking and a clever rethinking of a `tar`-like linear archive format. It's not clear whether this is a perfect fit for OCIv2 (given the lack of a formal specification and a few other conflicts with other requirements listed in this document), but we can definitely ~~steal~~ be inspired by their work.
* [FILEgrain (abandoned)](https://github.com/AkihiroSuda/filegrain) deduplicates files, but this file-grained deduplication turned out to be too coarse, as images tend to have almost-identical but slightly different files.
* [Image Packaging System][ips-openindiana] deduplicates files in a repository by addressing them by content hash. However, this falls into the same problem as FILEgrain. While sufficient for a distribution, it falls flat in cases where many software programs are compiled from almost the same sources. There was an effort to dissect the ELF format and strip off the compilation metadata, but that only got used to an extent before development was axed.
* [Dragonfly image service] uses chunk-level deduplication and separates file metadata from data, so that metadata doesn't affect data deduplication.
* DADI defines a novel layered image format based on a virtual block device, on which a conventional file system (such as ext4, xfs, or btrfs) is made. Thus modifying files (or their attributes) doesn't incur a copy-up operation; only newly written data is stored in the new layer. In this way DADI reduces duplication. See the [lazy fetch support](#Lazy-fetch-support) section for more details.
* [rkt (EOL)] was originally designed to use "file sets" to union together an image of the disk, rather than having any particular order.

[casync]: https://github.com/systemd/casync
[ips-openindiana]: https://github.com/openindiana/pkg5
[Dragonfly image service]: https://github.com/dragonflyoss/community/issues/9
[rkt (EOL)]: https://github.com/rkt/rkt

### Canonical Representation (Reproducible Image Building) ###

- Contact: Aleksa Sarai <asarai@suse.de>
- Contact: Peng Tao <bergwolf@gmail.com>

Two independent OCI image builders should produce bit-wise identical root filesystem representations in an OCIv2 image, given an identical root filesystem.
This should be mandated by having a canonical representation in the image-spec (such that any alternative representations are seen as invalid images). This supplements the [reduced duplication requirement](#Reduced-Duplication), but also provides additional verification of the image. If we end up using content-defined chunking directly in the OCIv2 format, this requirement becomes far more important, because it is necessary to ensure that file chunks are properly deduplicated when produced by different OCI image builders.

#### Existing Specification Limitations ####

The `tar` format doesn't have a formal specification, and different implementations have wildly different ways of representing different aspects of the filesystem. This doesn't produce any incompatibility (since all implementations support each others' quirks), but it does result in different image build tools producing slightly different root filesystem representations. [I wrote a blog post further explaining the issue.][cyphar-ociv2]

#### Motivating Use Case ####

The primary use-cases related to reducing duplication are similar to the [reduced duplication requirement](#Reduced-Duplication), but in addition it will allow OCI images to be far more reproducible (allowing independent verification of an image build). It will also allow us to ensure that all image builders are spec-compliant in a much more rigorous way (rather than permitting all sorts of ambiguity).

Reproducibility is important for making sure that we actually produce deduplicable images. Reproducibility comes in two forms: metadata reproducibility (e.g., inode attributes like owner, permission bits, and access times) and data reproducibility. While metadata reproducibility is hard to achieve across different build environments, data reproducibility is more realistic to pursue, and can be used to increase image deduplication.
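To make the connection between content-defined chunking and reproducibility concrete, here is a minimal sketch of a toy rolling-sum chunker (illustrative only -- casync uses a Buzhash-based chunker, and every name and constant below is an assumption for the example, not anything proposed for the spec):

```python
import hashlib

# Toy content-defined chunker: cut whenever a rolling sum over a small
# window satisfies a boundary condition.  Because boundaries depend only
# on the data itself, two builders chunking identical file contents will
# always produce identical chunks -- and thus identical blob digests.
WINDOW = 16
MASK = (1 << 11) - 1  # ~2 KiB average chunk size

def chunk(data: bytes) -> list[bytes]:
    chunks, start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        rolling += byte
        if i >= WINDOW:
            rolling -= data[i - WINDOW]
        at_boundary = (i - start >= WINDOW) and (rolling & MASK) == MASK
        if at_boundary or i == len(data) - 1:
            chunks.append(data[start:i + 1])
            start = i + 1
    return chunks

def digests(data: bytes) -> list[str]:
    # Content-addressable names for each chunk, as a registry would store them.
    return [hashlib.sha256(c).hexdigest() for c in chunk(data)]
```

Because the cut points are derived from the data rather than from fixed offsets, an edit near the start of a file only changes the chunks up to the next shared cut point; the chunk digests after that typically line up again and deduplicate.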
#### Prior Art ####

* [Reproducible builds][reproducible-builds] is very related to this topic, but doesn't directly apply to us (since their goal is to build the binaries in the root filesystem reproducibly).
* [Image Packaging System][ips-openindiana] chose not to be a bit-exact image format, but rather to focus on the data that could be reproducibly copied between images. Thus several metadata parts are represented as text, which is not bound to the representation of the underlying filesystem. This still enables IPS to verify and repair the consistency of an image, as long as any remote repository keeps the metadata and data available for download. Additionally, one can delete all files tracked in this way, copy the metadata over into a new filesystem/dataset, and let IPS replicate the image from the local mirror on site -- in IPS terminology, de-hydration/hydration.

[reproducible-builds]: https://reproducible-builds.org/

### Explicit (and Minimal) Filesystem Objects and Metadata ###

- Contact: Aleksa Sarai <asarai@suse.de>
- Contact: Peng Tao <bergwolf@gmail.com>
- Contact: Nisha Kumar <nishak@vmware.com>

This requirement is spiritually similar to the [canonical representation](#Canonical-Representation-Reproducible-Image-Building) requirement. Ideally, images should not contain information that is not necessary for the ordinary running of the container image, nor any information which is explicitly host-specific. Examples include:

* Access, creation, birth, and modification timestamps of files and directories. Many of these timestamps cannot be recreated after the image has been created (making them useless information), and they reduce the deduplication of images because they are metadata which effectively no user cares about -- why does it matter when a file that was part of a container image was last touched? The build timestamps are already available as annotation metadata in OCI descriptors.
* xattrs which are host-specific, such as SELinux labels and NFSv4 ACLs.
  * As an aside, the `security.capability` xattr has very specific container-related functionality which image builders need to be aware of, and it's unclear how much of that should be specified by the OCIv2 spec.
* It's unclear whether things like filesystem attributes (such as immutable or append-only) make sense to include in container images, given that their lack of inclusion by most OCIv1 image builders has not been noticed by the public.
* Device `major:minor` numbers are host-specific in many cases, and thus having them in the filesystem representation of the image makes little sense. Since the OCI runtime-spec has the facility to dynamically add device inodes, it seems strange to represent them as part of the filesystem rather than as part of the image's configuration. However, it should be noted that the OCI runtime-spec uses `major:minor` numbers as well.

#### Existing Specification Limitations ####

The OCIv1 specification simply re-uses `tar`, which was intended as an archival system rather than as a "lowest common denominator" representation of a filesystem. In addition, the usage of `tar` is very under-specified, meaning that certain extensions (such as xattrs, filesystem attrs, timestamps, and so on) aren't uniformly enabled by all builders. We need to agree on exactly which filesystem objects and metadata it makes sense to include in our images -- not to mention that some objects and metadata may not be portable across systems and different `tar` extraction tools.

#### Motivating Use Case ####

Some file metadata, such as timestamps, gets in the way of verifying [reproducibility of builds](https://reproducible-builds.org/), and it also adds to the duplication of image metadata.
Due to a lack of specificity in the specification, different image builders have had different levels of support for certain file metadata (for example, the Docker Hub was unable to build images that contained `security.capability` xattrs due to its use of AUFS). This results in needless frustration for users of such tools, and can cause interoperability issues between different container runtimes. Having a well-specified set of metadata that needs to be included in an image avoids compatibility issues, and making that set as minimal as possible allows our images to be as small as possible.

File permissions could perhaps be controlled via the configuration with `USER` rather than having them in the filesystem. This would allow external scanners to read the files without requiring the appropriate permissions, and perhaps allow for a more portable filesystem.

Copy-on-write filesystems include whiteouts and opaque files which stay in the filesystem, oftentimes causing issues with scanners as they are empty. There should be a canonical representation of which files got deleted, to prevent the inclusion of whiteout files in the image.

#### Prior Art ####

* Some of [Debian's image build tools][debian-build] strip out timestamps and other metadata that is not useful for container images. They also noticed that for some formats, setting the timestamp to `0` reduces the size of their images.
* systemd doesn't use `major:minor` numbers directly for its `DeviceAllow=` configuration, but instead makes use of the textual names listed in `/proc/devices`. This seems promising, but still has issues when it comes to the minor number (not to mention that some major numbers, like `5`, contain a wide variety of device types).
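To illustrate the effect that stripping non-essential metadata has, the following sketch builds an OCIv1-style `tar` layer while forcing host-specific fields to fixed values (the helper name and the exact set of zeroed fields are illustrative assumptions, not anything the spec mandates):

```python
import io
import tarfile

def minimal_tar(files: dict[str, bytes]) -> bytes:
    """Pack files into a tar archive, keeping only path, mode, and content.

    Timestamps, owners, and other host-specific metadata are forced to
    fixed values, so the same input always yields byte-identical output
    (and therefore an identical content-addressed digest).
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w", format=tarfile.GNU_FORMAT) as tar:
        for name in sorted(files):  # deterministic entry order
            info = tarfile.TarInfo(name=name)
            info.size = len(files[name])
            info.mode = 0o644        # fixed permissions for the sketch
            info.mtime = 0           # drop build-time timestamps
            info.uid = info.gid = 0  # drop host-specific ownership
            info.uname = info.gname = ""
            tar.addfile(info, io.BytesIO(files[name]))
    return buf.getvalue()
```

Two builds of the same file contents -- on different machines, at different times -- produce the same bytes, which is exactly the property that makes a minimal-metadata format deduplicate well.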
[debian-build]: https://wiki.debian.org/SystemBuildTools

### Mountable Filesystem Format ###

- Contact: Tycho Anderson <tycho@tycho.ws>
- Contact: Aleksa Sarai <asarai@suse.de>
- Contact: Peng Tao <bergwolf@gmail.com>
- Contact: Jiang Liu <liuj97@gmail.com>

In order to provide certain guarantees of the integrity of the filesystem (namely, that the image being run is actually the exact same image as the one signed), it should be possible to implement a *read-only* kernel filesystem driver for the OCIv2 image format. Users could then use `overlayfs` to create a read-write layer if required. It would also be feasible to directly mount the image from a global shared read-only storage system. The image filesystem metadata may optionally provide prefetching hints to improve performance.

#### Existing Specification Limitations ####

The OCIv1 specification can already fulfil this requirement through explicit extension, by simply replacing the `tar` layers with `squashfs` layers. Unfortunately, this makes such systems non-standard and completely blocks interoperability. In addition, if we wish to migrate away from `tar`-based layers, then this use case will not be helped by any of the work we do for OCIv2.

#### Motivating Use Case ####

Certain businesses have stricter requirements for running container images, and would like to be absolutely sure that the code being executed is exactly the same code as was signed. Having an extraction stage adds an additional point of attack (the code may be tricked into extracting images incorrectly). In addition, once extracted, it's possible for privileged programs to be tricked into modifying the rootfs (invalidating the signature), and systems to stop such modifications going undetected (such as IMA) are complicated to configure and thus add even more points of attack.

#### Prior Art ####

- [squashfs][squashfs]

  The primary example of prior art (which is usable with OCIv1) is [squashfs][squashfs].
  However, it conflicts with many of the other requirements mentioned in this document (it would have to be used in a layered mode as in OCIv1, since it is literally a full filesystem).

[squashfs]: https://en.wikipedia.org/wiki/SquashFS

- FUSE-based filesystems

  Other examples are the FUSE filesystems that many prior works have implemented. A kernel filesystem could then be implemented if a user-space filesystem proves to satisfy most of our requirements.

- DADI

  DADI has a novel layered image format, and its layer blobs can be mounted directly without unpacking. See the [lazy fetch support](#Lazy-fetch-support) section for more details.

### Bill of Materials ###

- Contact: Nisha Kumar <nishak@vmware.com>
- Contact: Aleksa Sarai <asarai@suse.de>

It should be trivial to identify the contents of an image, and to verify that the image contains only trusted software components (presumably from a trusted vendor). This should be possible without extracting the root filesystem, and ideally should be possible purely by looking at metadata blobs. Ideally this would be done through a standardised Bill of Materials (BoM) format which includes cryptographic signatures from the vendor to assert that a particular software package exists.

#### Existing Specification Limitations ####

`tar` archives are flat files which cannot be seeked, so tools are forced to scan the archive in order to determine what software they contain. There is currently no standard way to represent a BoM with OCIv1, so all scanning tools have their own schemes for determining package versions -- but they can't be sure whether any additional non-distribution software is present in an image. Furthermore, the ability to add filesystem changesets on top of already-distributed filesystems complicates tracing the software supply chain back to the original contributors to the container image.

License compliance often requires that sources be available along with the container image.
Current SBoM specifications assume this can be provided with a standard HTML link. Due to the use of content-addressable pointers in container images, the actual download location of the artifacts is vague.

#### Motivating Use Case ####

Being aware of the exact software versions and the corresponding vendors across all container images allows for automated alerting and updates to such containers, without needing to depend on scanning solutions (which, despite their best efforts, don't know as much about the image contents as the original image builder -- especially in the case of distribution base images).

#### Prior Art ####

[BuildSourceImage](https://github.com/containers/BuildSourceImage) is used to pull source archives using the same client tools as the ones used to pull the container images themselves. A strawman of how the source tarballs are stored can be found [here](https://hackmd.io/dbRUEQRSQ268e9YWm1u2Pg).

There are many SBoM standards one can look to for identifying what metadata end users need to extract from a container image:

- [SPDX](https://spdx.github.io/spdx-spec/)
- [CycloneDX](https://github.com/CycloneDX/specification)
- [3T](https://www.it-cisq.org/software-bill-of-materials/)

[Tern](https://github.com/tern-tools/tern) is an SBoM generator tool for container images. It generates SBoMs in a number of formats, SPDX being one of them. It can be used to get a sense of the size and content of an SBoM document and what metadata is relevant.
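As a rough sketch of the "metadata blobs only" idea: a BoM could be serialized canonically and stored as its own content-addressed blob, referenced from the manifest like any other descriptor. Everything below (the media type, the package-list shape, the `bom_blob` helper) is a hypothetical illustration, not a proposed standard:

```python
import hashlib
import json

def bom_blob(packages: list[dict]) -> tuple[str, bytes]:
    """Serialize a (hypothetical) BoM as a canonical JSON blob.

    Sorting entries and keys, and stripping whitespace, makes the
    serialization canonical, so the blob's digest is stable across
    producers and can be referenced from an image manifest without
    touching the root filesystem at all.
    """
    doc = {"schemaVersion": 1,
           "packages": sorted(packages, key=lambda p: p["name"])}
    blob = json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    return "sha256:" + hashlib.sha256(blob).hexdigest(), blob

digest, blob = bom_blob([
    {"name": "openssl", "version": "1.1.1g", "vendor": "example"},
    {"name": "busybox", "version": "1.31.1", "vendor": "example"},
])
# A manifest could then point at the BoM as an ordinary descriptor:
descriptor = {
    "mediaType": "application/vnd.example.bom.v1+json",  # hypothetical media type
    "digest": digest,
    "size": len(blob),
}
```

A scanner could then fetch and verify only this small blob, rather than walking the whole rootfs.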
### Lazy fetch support ###

- Contact: Bo Liu <obuil.liubo@gmail.com>
- Contact: Kohei Tokunaga <ktokunaga.mail@gmail.com>
- Contact: Akihiro Suda <suda.kyoto@gmail.com>
- Contact: Wei Fu <fuweid89@gmail.com>
- Contact: Peng Tao <bergwolf@gmail.com>
- Contact: Huiba Li <lihuiba@gmail.com>
- Contact: Jin Zhang <zj3142063@gmail.com>
- Contact: [OCI Distribution maintainers](https://github.com/opencontainers/distribution-spec/blob/master/MAINTAINERS)

The goal is simply to minimize the time spent pulling images, and to start a container as fast as possible. From [FAST '16, Slacker: Fast Distribution with Lazy Docker Containers](https://www.usenix.org/node/194431): "Our analysis shows that pulling packages accounts for 76% of container start time, but only 6.4% of that data is read."

#### Existing Specification Limitations ####

Prior to starting a container, images must be pulled and unpacked into the runtime filesystem bundle. While only a small fraction of an image is used by applications, the runtime has to wait for the entire container image to be pulled before creating new containers.

The current authentication workflow also needs improvement for security, because lazy fetching always needs a fresh token to get data. Validation is also needed at the granularity of lazily-pulled chunks.

#### Motivating Use Case ####

In a multi-tenancy environment, by nature, one tenant is not permitted to use another tenant's container image without authorization, so it's not possible to share underlying layers -- which means that almost every time a tenant starts a container, the container image needs to be pulled.

#### Prior Art ####

- [CernVM-FS](https://cernvm.cern.ch/portal/filesystem)

  CernVM-FS implements "lazy fetch" through FUSE and deploys a content-addressable storage in the local environment.
- [CRFS](https://github.com/google/crfs)

  From its [README](https://github.com/google/crfs/blob/master/README.md): "CRFS is a read-only FUSE filesystem that lets you mount a container image, served directly from a container registry, without pulling it all locally first." It also leverages stargz, a seekable tar.gz format, which addresses the problem that tar.gz files are unindexed and unseekable. A stargz layer is fully compatible with a legacy OCI tar.gz layer.

- [Stargz Snapshotter & eStargz](https://github.com/containerd/stargz-snapshotter)

  A containerd implementation of stargz. Comes with an extended version of stargz ("eStargz"), which contains an optional manifest for optimizing prefetching.

- DADI (Contact: Huiba Li <lihuiba@gmail.com>)

  DADI is a brand-new solution deployed extensively in Alibaba's production environment. It redesigns an image as a layered block device, on which a conventional file system (such as ext4) is made. It supports mounting an image filesystem served directly from a container registry, or from other data sources such as NFS or HDFS. In addition to layering, DADI retains the feature of compression and introduces online decompression, so DADI layer blobs don't need to be decompressed or unpacked in order to launch containers. The proposal for a Mountable Filesystem Format could thus possibly be satisfied by DADI as well. DADI also provides an optional writable layer, so as to run containers without the help of a union file system. More materials and source code are forthcoming.

- [Dragonfly image service](https://github.com/dragonflyoss/community/issues/9)

  Allows lazily fetching the rootfs image by only downloading the file data accessed by applications.

### Extensibility ###

- Contact: Peng Tao <bergwolf@gmail.com>
- Contact: Bo Liu <obuil.liubo@gmail.com>

The goal is to define an extensible image format, so that we can add extended features to the format without breaking backward compatibility.
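One common pattern for this kind of backward compatibility can be sketched as follows: optional features live in descriptors with their own media types, and a consumer simply skips any descriptor it does not understand. The extension media type below is invented for illustration; only the `tar+gzip` layer type is a real OCIv1 media type:

```python
# Sketch: a consumer that tolerates optional extensions by ignoring any
# descriptor whose mediaType it does not understand.  Old clients keep
# working; new clients can opt in to the extra blobs.
KNOWN = {
    "application/vnd.oci.image.layer.v1.tar+gzip",
}

def usable_layers(manifest: dict) -> list[dict]:
    """Return only the layer descriptors this client knows how to process."""
    return [d for d in manifest.get("layers", []) if d["mediaType"] in KNOWN]

manifest = {
    "layers": [
        {"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
         "digest": "sha256:aaa..."},
        # A hypothetical per-file signature extension a newer client could use:
        {"mediaType": "application/vnd.example.signatures.v1+json",
         "digest": "sha256:bbb..."},
    ]
}
```

Defining signing (or alternative compression) as such an optional descriptor means images may or may not carry it, without breaking older consumers.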
#### Existing Specification Limitations ####

The current specification has some level of extensibility at the manifest level. However, at the binary format level (tar.gz), there is no extensibility.

#### Motivating Use Case ####

Users may want to sign the image, or certain files inside the image. Instead of forcing every image to follow the same signing behavior, we can define it as an extension to the image format, so that images may or may not contain signatures. Other use cases may include different compression algorithms, etc.

### Verifiability and/or repairability ###

- Contact: Till Wegmüller <toasterson@gmail.com>

The goal of verifiability is to check for intentional or accidental image corruption on a per-file basis, so that a container runtime can warn the user or fix the damage automatically, depending on the use case. This untangles the requirement for verifiability from the **Mountable Filesystem Format** requirement, in order to increase the chance that this makes it into the spec.

#### Existing Specification Limitations ####

The existing spec does not keep track of per-file checksums. The container runtime needs to unpack and checksum every file in all tar layers to get a hash with which to verify one file. Extending that to all files in an image results in an immense cost in computing resources. Another way is to keep an unpacked image file in a separate read-only filesystem as the source of the verification process.

#### Motivating Use Case ####

When I have disk failures with data corruption and I have to make sure a machine is fit for production after a data recovery, having the capability to just ask the system to verify integrity (and optionally repair any damage it finds) has saved many otherwise-doomed machines before. While not needed in distributed systems, containers are more often than not used as a software deployment tool for single systems.
#### Prior Art ####

* [Image Packaging System][ips-openindiana] has two commands to verify and repair images, be that the container host or the containers. IPS, being a packaging system/image hybrid, makes exceptions for files not handled by the packages.
* [Dragonfly image service](https://github.com/dragonflyoss/community/issues/9) stores both a metadata checksum and a data checksum for each file, which can be used for end-to-end integrity checks.

### Reduced Uploading ###

- Contact: Sargun Dhillon <sargun@sargun.me>

When uploading an image, users should not have to re-upload large quantities of data that the server may already have. For example, if there are two repositories on the server, ubuntu and myapp, and myapp incorporates the ubuntu image, the user should not need to re-upload the ubuntu data when pushing myapp. Approaches to solving this problem require a storage format change, plus a registry protocol extension.

We need to prove that the user has the image that they claim to have. We therefore need a mechanism of challenge-response authentication to ensure the user has the blob they claim to have. This can be implemented by using an existing hash function like BLAKE2b, and storing the intermediate state of the hash function prior to finalization, alongside the final hash. The authentication protocol can then allow the server to cheaply append data to the file and calculate the final hash. The server can then ask the user to do the same, to prove that the user has the data.

#### Existing Specification Limitations ####

The current design of OCI images has the blobs stored by hash, with no metadata associated with them (such as intermediate hash state). Therefore, in the current approach this needs to be recalculated each time. There are registries which support cross-repo blob mounts, but that's further complicated by the fact that the user has to know about all the other repos (and have access to them) on the server.
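The challenge-response idea can be sketched in-process with `hashlib.blake2b`. Note one caveat: Python's stdlib hash objects can be copied but not serialized to disk, so a real protocol would need a hash implementation with exportable intermediate state; the function names here are illustrative:

```python
import hashlib
import os

CHUNK = 1 << 16

def hash_state(blob: bytes):
    """Hash a blob and keep the pre-finalization state for later reuse."""
    h = hashlib.blake2b()
    for off in range(0, len(blob), CHUNK):
        h.update(blob[off:off + CHUNK])
    return h  # caller keeps (a copy of) this state instead of re-reading the blob

def prove_possession(state, challenge: bytes) -> str:
    """Append a server-chosen challenge to the saved state and finalize.

    Both sides do this from their stored intermediate state, so neither
    has to re-hash the (potentially huge) blob to answer the challenge.
    A party holding only the *final* digest cannot answer, because the
    challenge is mixed in before finalization.
    """
    h = state.copy()
    h.update(challenge)
    return h.hexdigest()

# Server and client each hold the intermediate state for the same blob:
blob = os.urandom(1 << 20)
server_state = hash_state(blob)
client_state = hash_state(blob)
challenge = os.urandom(32)
assert prove_possession(server_state, challenge) == prove_possession(client_state, challenge)
```

Because `.copy()` preserves the saved state, the same stored state can answer any number of distinct challenges cheaply.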
#### Motivating Use Case ####

There are several use cases motivating this requirement:

* Reducing the necessary internet traffic for image uploads reduces the time it takes to build and push a container. This is usually the most expensive part of spinning up a (new) app. In addition, when a new "ubuntu" release comes out, it cascades into slowing down everybody's first new build.

#### Prior Art ####

* [Proof of data possession](http://cryptowiki.net/index.php?title=Proof_of_data_possession): A "set" of cryptographic protocols to build on top of.

### Untrusted Storage ###

- Contact: Sargun Dhillon <sargun@sargun.me>

It is valuable to be able to use third-party storage systems to store your repositories. Examples are S3, and [peer-to-peer filesystems](https://github.com/miguelmota/ipdr). Unfortunately, these systems have weak privacy guarantees, as it's easy to misconfigure them, or they do not implement privacy at all. We would need a way to tell clients that the blobs are encrypted, and to decrypt them with a given key prior to verifying / launching them. In the ideal case, the manifest file would be able to reference these encrypted blobs and indicate which key to use to decrypt them.

#### Existing Specification Limitations ####

There's nothing in the image format to indicate that there is a need to decrypt layers. Although one could be added, it would be beneficial to have a standardized key format that can be used to indicate to the user which key they can use to decrypt the image.

#### Motivating Use Case ####

There are several use cases motivating this requirement:

* Image caching, where the image should be unavailable for launching while the machine is not running the workload.
* Utilization of untrusted, or semi-trusted, storage services.

#### Prior Art ####

* [Tardigrade](https://tardigrade.io/): A storage system in which the end nodes are untrusted.
* [Tahoe-LAFS](https://tahoe-lafs.org/trac/tahoe-lafs): A distributed storage system with untrusted nodes.
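The "manifest references encrypted blobs and indicates which key to use" flow can be sketched as below. The XOR keystream is a stand-in for illustration only (a real design would use an authenticated cipher such as AES-GCM), and every field name (`keyID`, the media type, the descriptor shape) is a hypothetical example, not a proposed format:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a BLAKE2b-derived keystream.

    NOT real cryptography -- a stand-in so the sketch is self-contained.
    """
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.blake2b(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def fetch_and_open(descriptor: dict, ciphertext: bytes, keyring: dict) -> bytes:
    """Decrypt an encrypted blob, then verify the *plaintext* digest."""
    key = keyring[descriptor["keyID"]]      # manifest says which key to use
    plaintext = keystream_xor(key, ciphertext)
    digest = "sha256:" + hashlib.sha256(plaintext).hexdigest()
    if digest != descriptor["digest"]:      # verify after decryption
        raise ValueError("blob corrupted or wrong key")
    return plaintext

# Hypothetical descriptor: the digest is of the plaintext, plus a hint
# telling the client that (and how) the stored blob is encrypted.
key = b"0" * 32
layer = b"example layer contents"
encrypted = keystream_xor(key, layer)
descriptor = {
    "mediaType": "application/vnd.example.layer.v1+encrypted",
    "digest": "sha256:" + hashlib.sha256(layer).hexdigest(),
    "keyID": "tenant-key-1",
}
assert fetch_and_open(descriptor, encrypted, {"tenant-key-1": key}) == layer
```

The untrusted store only ever sees `encrypted`; the client decrypts with the key named in the manifest, and the digest check still catches corruption or a wrong key before the layer is used.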