<!--
Rough script:
For those of you who aren't Docker fans: the same techniques enable interop with tons of other tools, check it out :)
Rough demo script:
- hey everyone, so I wanted to show you some work I've done on enabling IPFS to be interoperable with existing tooling. In this case we'll be looking at Docker.
Show demo: `docker pull golang@latest` with side showing that it's getting the data from IPFS.
Looks easy, so what's going on behind the scenes?
We have an HTTP proxy intercepting the Docker requests, noticing when they are requesting content by hash, and going to IPFS instead.
For example: mdinc download-with
But how does that work under the covers?
First we go to a routing system, here I have a local Reframe delegated routing endpoint.
`someguy ask --endpoint=http://localhost:5555 findprovs <cid>`
We see the proof CID (published by me to the indexers). I might also be serving the data; in this case I'm not. However, web3.storage is, which I can discover with `someguy ask --endpoint=http://localhost:5555 findprovs <proofcid>`
So I can then do:
`mdinc download <multihash> <proof-cid> <multiaddr>`
and this is what's going on behind the scenes for each HTTP request from docker for a layer.
-->
# Docker + IPFS
<!-- Put the link to this slide here so people can follow -->
slides: https://hackmd.io/@adin/docker-ipfs
---
# Docker Background
- Docker ships around containers + manifests
- Locally uses content addressing - `SHA256(200MB layer)`
- Registries track mutable references (`golang@latest`) and serve immutable data (`sha256:deadbeef...`)
- Cannot download `golang@sha256:deadbeef` if the registry is inaccessible even if it's present elsewhere
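The content addressing above is just the SHA-256 of the raw bytes. A minimal sketch (the layer bytes here are a hypothetical stand-in, not a real image layer):

```python
import hashlib

# Hypothetical stand-in for a compressed layer blob.
layer = b"example layer bytes" * 1000

# Docker identifies immutable blobs by "sha256:<hex digest>" of their
# bytes; this is the digest that appears in a manifest's layer list.
digest = "sha256:" + hashlib.sha256(layer).hexdigest()
```

Because the digest depends only on the bytes, any party holding the same bytes can reproduce and serve it, which is what makes registry-independent fetching possible in principle.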
---
## Enter IPFS
- flat address space (e.g. `sha256:deadbeef`, not `golang@sha256:deadbeef`)
- means you can fetch manifests, layers, etc. from anywhere
- can be extended to effectively enable decentralized registries when combined with a decentralized mutability scheme
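"Flat address space" means the address is just a self-describing hash with no registry name attached. In IPFS that self-describing form is a multihash; a small sketch of building one by hand (the multihash codes are from the multiformats spec; the input bytes are arbitrary):

```python
import hashlib

data = b"hello"
digest = hashlib.sha256(data).digest()

# A multihash self-describes the hash function and digest length:
#   <varint fn code><varint digest length><digest>
# sha2-256 has code 0x12 and a 32-byte (0x20) digest, so both
# varints fit in a single byte here.
multihash = bytes([0x12, 0x20]) + digest
```

Anyone holding `data` can answer a request for this multihash, regardless of which registry (if any) originally published it.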
---
## Why hasn't this been tried before?
- It has been, but it always needed custom manifests and registries, since IPFS has been unable to handle `SHA256(200MB layer)` directly, instead needing to merklize the data with something like UnixFS
- This leverages a [strategy](https://docs.google.com/presentation/d/1WLCMCxzQDaITi93x-wIfChp2O0yMy-24VgkyJ0hhrgY/) for incrementally verifiable large blocks
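The property the linked strategy builds on is that SHA-256 is a Merkle-Damgard construction: the running state after each chunk depends only on the bytes seen so far, so a large block can be hashed (and, with access to the internal state, checkpointed and verified) as a stream. A simplified illustration using `hashlib`, which exposes the streaming interface but not the mid-stream state that the actual proofs rely on:

```python
import hashlib

blob = bytes(range(256)) * 4096  # stand-in for a large layer (~1 MB)

# Hash the whole blob at once...
whole = hashlib.sha256(blob).hexdigest()

# ...and in 64 KiB chunks. The chunked result is identical, since
# each update() just advances the same internal state.
h = hashlib.sha256()
for i in range(0, len(blob), 64 * 1024):
    h.update(blob[i : i + 64 * 1024])

assert h.hexdigest() == whole
```

The incremental-verification strategy goes further by checkpointing that internal state, so each chunk of a 200MB layer can be verified as it arrives rather than only after the whole download.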
---
# Demo
---
## How does it work - Recap:
- Use a local HTTPS proxy to intercept Docker's requests for hashes
- Ask DHT + Indexers for who has the data
- If someone has it, ask them; if there's a proof, use it
- Find who has the proof (web3.storage)
- Download the data using the proof (client using Bitswap)
- Return the data over HTTP to Docker
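The interception step comes down to recognizing which requests are content-addressed. A sketch of that routing decision (the path shape follows the OCI distribution spec; the function names and return strings are illustrative, not the demo's actual implementation):

```python
import re

# Registry blob fetches look like:
#   GET /v2/<name>/blobs/sha256:<64 hex chars>
BLOB_RE = re.compile(r"^/v2/(?P<name>.+)/blobs/sha256:(?P<digest>[0-9a-f]{64})$")

def route(path: str) -> str:
    """Decide whether a request can be served from IPFS by hash."""
    m = BLOB_RE.match(path)
    if m:
        # Content-addressed: the digest alone is enough to fetch
        # the bytes from anyone who has them.
        return "ipfs lookup for sha256 " + m.group("digest")
    # Mutable references (tags, manifests by name) still need the registry.
    return "pass through to the registry"
```

Only the blob requests are rerouted; tag resolution (`golang@latest`) stays with the registry, which matches the mutable/immutable split described above.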
---
## Extensibility - what made this possible
- Bitswap -> block-based access enables new schemes without network changes
- Indexers -> allow small arbitrary data to be associated with blocks
- Reframe -> allows upgrading responses to contain new data without a network upgrade
---
## Just the beginning
- Can adapt this pattern to many existing package managers (language package managers like Go/npm)
- This demo just does fetching, the data could be served as well
---
## Wrap up
- Can fetch Docker resources over IPFS no matter how big they are
- Can apply this to many existing package managers; hashes are everywhere. Wherever you see hashes addressing data, IPFS may be able to help!
---
## Thanks!
Reach out on #ipfs-implementers if you're interested.