# Better support of Docker layer caching in Cargo
This document serves as a summary of motivation, existing workarounds, prior art and possible solutions for the Cargo issue [#2644](https://github.com/rust-lang/cargo/issues/2644) (dubbed "original issue" in the rest of the document), which is currently the most commented and upvoted issue on the Cargo repository. The original statement of the issue is "provide a Cargo command for building only the dependencies of a project"; however, this description does not adequately capture the primary problem, so I will mostly talk about "**modifying Cargo to better support Docker layer caching**", which is the primary use-case, as described below.
**TLDR**: Solving this issue for Cargo projects is far from trivial; it is not as easy as it is for e.g. `npm`, `pip` or other similar build systems, for reasons listed [below](#What-makes-Cargo-different-from-other-build-tools). There are several workarounds that can be used to make Cargo Docker builds faster. The two workarounds that seem the most promising are [`cargo chef`](#Using-cargo-chef) and [Docker cache mounts](#Using-Docker-cache-mounts). We would be glad if you could comment on whether these workarounds work for you or not (and if not, why not?) on the original [GitHub issue](https://github.com/rust-lang/cargo/issues/2644).
# Use-cases
This section lists several use-cases from the original issue that are related to "building dependencies only". The Docker use-case is the primary one, and the rest of the document will focus on it specifically.
> In this document, I will use the term `local crates` for crates that are being developed by the user of Docker and Cargo, i.e. the crates of their workspace that are being copied into a Docker image, and which change often. And the term "dependencies" for external crates that are being installed from a remote registry (e.g. `crates.io`) and that do not change as often. There are edge cases to this distinction (e.g. local path dependencies), but I think that this distinction is quite clear and useful for talking about this problem.
It's surely worthwhile to also talk about what exactly a "dependency" is, but I'll leave that for another discussion; this document is already quite long.
- **Faster Docker Cargo builds**
Rust projects can be notoriously slow to build, and this is exacerbated in Docker builds (either on CI or locally), where running a build after any tiny source code change will typically trigger a recompilation of the whole project, along with all its dependencies.
Slow Docker builds lengthen the feedback loop of CI tests and waste CI worker time by rebuilding the same things over and over again. Sadly, the default ("naive") usage of Cargo with Dockerfiles suffers from this, and will cause everything to be recompiled after any source code change (the details are discussed further below).
> Note: Not all CI builds use Docker of course, and there are other ways of speeding non-Docker CI builds (e.g. https://github.com/Swatinem/rust-cache). However, as shown below, it seems that using Docker for building Cargo projects is quite common, both for local development and on CI (as expected, since Docker is a very popular tool).
As is evident in the original issue, many Cargo users find this problematic, and expect that Cargo will provide some way of making it easier to avoid this wasted work ([1](https://github.com/rust-lang/cargo/issues/2644#issuecomment-343520472), [2](https://github.com/rust-lang/cargo/issues/2644#issuecomment-368354185), [3](https://github.com/rust-lang/cargo/issues/5098), [4](https://github.com/rust-lang/cargo/issues/2644#issuecomment-408464097), [5](https://stackoverflow.com/questions/42130132/can-cargo-download-and-build-dependencies-without-also-building-the-application), [6](https://github.com/rust-lang/cargo/issues/2644#issuecomment-413838210), [7](https://github.com/rust-lang/cargo/issues/2644#issuecomment-419633233), [8](https://github.com/rust-lang/cargo/issues/2644#issuecomment-419900808), [9](https://github.com/rust-lang/cargo/issues/2644#issuecomment-425307442), [10](https://github.com/rust-lang/cargo/issues/2644#issuecomment-439057779), [11](https://github.com/rust-lang/cargo/issues/2644#issuecomment-471238110), [12](https://github.com/rust-lang/cargo/issues/2644#issuecomment-475130496), [13](https://github.com/rust-lang/cargo/issues/2644#issuecomment-481730347), [14](https://github.com/rust-lang/cargo/issues/2644#issuecomment-486498915), [15](https://github.com/rust-lang/cargo/issues/2644#issuecomment-1322295124), [16](https://github.com/rust-lang/cargo/issues/2644#issuecomment-1416869914), [17](https://github.com/rust-lang/cargo/issues/2644#issuecomment-537637178)). Especially since several other build tools (similar to Cargo) offer a similar functionality (specifics are described later in this document). Users are especially frustrated because it sounds like a trivial functionality to implement (spoiler alert: it's not trivial), and it is hard to explain what is difficult about it (this document tries to change that).
The goal could be stated like this: **Make it easier to support Docker layer caching of dependencies properly for Rust users, and thus reduce wasted work when building Rust Docker containers**. It is implied that this could be done by modifying Cargo somehow, however that is not the only way to resolve this issue (as we'll see later).
Very large gains can be achieved by enabling proper Docker layer caching, which is expected, because building all dependencies of a Cargo project simply takes a lot of time (from my experience, Cargo projects tend to have tens or hundreds of dependencies). As a datapoint, [this article](https://whitfin.io/speeding-up-rust-docker-builds/) mentions a ~90% reduction in Docker image build time after making Docker layer caching work with a workaround (existing workarounds are described later in this document). Accumulated globally, this could reduce Docker Rust build times by a non-trivial amount.
There can be multiple motivations for using Cargo in Docker, for example:
- It can be used to provide a containerized Rust development environment, which is used for all kinds of development activities - performing a type check, compiling, running unit tests, benchmarks, formatting, lints, checking that the project compiles with multiple features enabled, etc. In this case, the user might want to execute a lot of different types of builds (`cargo test`, `cargo check`, `cargo build`, `cargo build --release`, `cargo build --features foo`), which could make the cached Docker layers quite bloated.
- It can be used to deploy a Rust application, or to provide an environment with external services required for running the Rust application. For example, the user builds a Rust web application which needs PostgreSQL and Redis, and these external services are provided by a Docker image. In this case the user probably builds a production Docker image that simply compiles the app with e.g. `cargo build --release`, and other development activities happen outside of Docker. The deployment can be performed from CI.
Note that only this use-case will be considered in the rest of the document; the other ones below are described briefly only for completeness. Some solution for Docker caching might also solve the other use-cases, but it is not an explicit goal.
- Profiling/timing of local crate(s) only
Some users noted that they would like to gather better profiling data or timing metrics of the final build of their own (local) crates, without also profiling the build of dependencies ([1](https://github.com/rust-lang/cargo/issues/2644#issuecomment-366272879), [2](https://github.com/rust-lang/cargo/issues/2644#issuecomment-607672765)). As a maintainer of [`rustc-perf`](https://github.com/rust-lang/rustc-perf) myself, I note that this would also be useful for the Rust benchmarking suite (currently it uses various hacks to only measure the final crate).
This use-case is probably closest to the original statement of "building dependencies only" and it could probably be resolved by a relatively simple change that provides a "deps only" build (which has already been implemented several times, however it does not solve the Docker use-case).
Since the issue was created, the `--timings` flag has been added to Cargo. This is useful for seeing how long individual crates take to build in a one-off manner. However, it doesn't integrate with a benchmarking process like `hyperfine` to statistically eliminate jitter.
- Prepackaging for Debian/nix
Another use-case is improving the support of preparing Cargo projects for packaging ([Debian](https://github.com/rust-lang/cargo/issues/2644#issuecomment-599526575), [nix](https://github.com/rust-lang/cargo/issues/2644#issuecomment-613024548)). There wasn't a lot of information about these use-cases in the original issue, however it seems that a solution for Docker layer caching could also resolve the mentioned `nix` use-case.
- Generic dependency precompilation
There have been a few mentions of wanting to "pre-compile" the dependencies for various other use-cases ([Rust playground](https://github.com/rust-lang/cargo/issues/2644#issuecomment-278845622), [custom build system](https://github.com/rust-lang/cargo/issues/2644#issuecomment-492333826)). It's unclear to me whether there is some shared pattern here, however it looks like the Rust playground use-case is basically just "better support for Docker layer caching" again.
- Generating documentation for dependencies despite the current code being broken
- A user might be in the middle of a change and want to see the documentation for the dependencies they've added, but the code does not compile yet
- The [unstable `--keep-going` flag](https://github.com/rust-lang/cargo/issues/10496) is another potential way of resolving this
# Problem description
The "naive" and simplest approach for writing a Rust project Dockerfile looks something like this:
```dockerfile
# Use a base image with Rust and Cargo preinstalled
FROM rust:1.68.1
WORKDIR /app
# Copy Cargo.toml
COPY Cargo.toml .
# Copy source files of local crate(s)
COPY src src
# Build the project
RUN cargo build
```
Each executed command creates a new Docker layer (basically a file-system snapshot). When you build a Docker image from the same Dockerfile again, the layers are cached by default. When the inputs of a layer created by `COPY` change, that layer and all following layers are invalidated and executed again. This means that when the Cargo manifests or any source file changes, `cargo build` will run from scratch, downloading the `crates.io` index, rebuilding all dependencies and then rebuilding the local crates, which takes a lot of time.
What Rust Docker users would want, and what is typically possible with other build systems, is this behaviour:
1) When a source file changes, only rebuild the local crates.
2) When Cargo.toml (or a similar manifest/lock file) changes, rebuild everything.
This is based on the observation that during development, the files that specify dependencies (e.g. `Cargo.toml`) change much less often than the actual project source files, and therefore it's wasted work to rebuild the dependencies again and again, even though they will produce the same compiled artifacts each time.
> Note: @ehuss has [mentioned](https://github.com/rust-lang/cargo/issues/2644#issuecomment-599577841) that `Cargo.toml` might also change often and that the distinction between `Cargo.toml` and source file changes is not that important. However, based on the [answers](https://github.com/rust-lang/cargo/issues/2644#issuecomment-486503667) of people in the original issue and my experience, there is a big difference, and users are generally OK with rebuilding everything when `Cargo.toml` changes, as long as dependencies are not rebuilt when only the source of local crates changes, which is the most common scenario.
# Current workarounds
Because the naive approach rebuilds everything from scratch every time, users employ various workarounds, which are described below. The workarounds are somewhat arbitrarily sorted from the most popular and/or most usable ones to solutions that are legacy, too complex or that outright don't work.
> Note: some of the mentioned workarounds can be combined to further speed up builds (for example, Docker cache mounts or `sccache` are partly orthogonal to avoiding rebuilding the dependencies), but here I primarily mention each workaround as a separate solution for dependency Docker layer caching.
## Manually copying `Cargo.toml` and doing a two step build
This is probably the most common workaround that sort-of works for the simplest cases. Users write their Dockerfile like this:
```dockerfile
# First step: build dependencies
FROM rust:1.68.1
WORKDIR /app
# Copy Cargo.toml
COPY Cargo.toml .
# Create a dummy file, to allow compilation to succeed
RUN mkdir src && touch src/lib.rs
# Run cargo build on a "dummy" project
# This only builds the dependencies
RUN cargo build
# Second step: build local crate(s)
# Copy the actual source files
COPY src src
# Run the final cargo build
# Dependencies are already built here, only local crates are built
RUN cargo build
```
By copying only the manifest file, which does not change as often as the source files, to the Docker image, the first `cargo build` builds only the dependencies of the project, and is cached by Docker until `Cargo.toml` changes. When a source file changes, only the last two commands/layers are re-executed, and thus only the local crates get rebuilt, since the dependencies are cached by Docker.
### Advantages
- In the simplest of cases, this works as expected (except when it doesn't - see invalidation problems below).
- It's quite familiar to users of other build tools/languages, where there is often a very similar pattern (although usually without the need to create dummy files).
### Disadvantages
Oh boy, are there many :)
- This gets hairy fast for workspace projects ([example](https://github.com/rust-lang/cargo/issues/2644#issuecomment-537637178)), because there are multiple `Cargo.toml` files in nested directories, and they all have to be manually copied into the Docker image. We need to copy all of them, because dependencies can be "scattered" in any of these manifest files.
This is made worse by the fact that Docker cannot currently copy files recursively with a glob while keeping their directory structure (https://github.com/moby/moby/issues/39530), therefore the manifest files pretty much have to be enumerated and their containing directories created manually (see the sketch after this list).
- `Cargo.toml` can contain source file links to build scripts, library/binary entrypoints (`lib.rs`, `main.rs`), etc. For identifying the dependencies, we do not need to know the contents of these files, however if we only copy the manifest to the Docker image and then run `cargo build`, the build will fail if Cargo is unable to find these files. Ideally, we do not want to copy these files to the "dependency" Docker layer, both to avoid unnecessary invalidation and also to avoid enumerating these files in the Dockerfile.
This is also the reason why we need to create the "dummy" file. Even when it's implicit and not specified in `Cargo.toml`, Cargo currently needs at least some entrypoint for the crate to build it (and its dependencies)!
- Cargo config files (`config.toml`) can affect e.g. the compiler flags (among other things), therefore they need to be taken into account when deciding how to compile dependencies. The user thus needs to also copy these files to the Docker image.
- The creation of the dummy file(s) can cause problems with Cargo's change detection of source files ([1](https://github.com/rust-lang/cargo/issues/2644#issuecomment-606728312), [2](https://github.com/rust-lang/cargo/issues/2644#issuecomment-612998942), [3](https://github.com/rust-lang/cargo/issues/2644#issuecomment-634604514), [4](https://github.com/rust-lang/cargo/issues/2644#issuecomment-667129578), [5](https://github.com/rust-lang/cargo/issues/2644#issuecomment-591363884)). When we copy the actual `lib.rs` file into the Docker image in the second step, the `mtime` (modification time) attribute is not updated, and therefore Cargo might not notice that the file has changed (w.r.t. the dummy file), which causes compilation issues.
- Build scripts, path dependencies, patch sections
- Do they pose a problem for this workaround specifically? I haven't been able to find specific evidence for this, but I'm not sure.
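For illustration, here is roughly what the dependency step of this workaround can look like for a workspace (a sketch, assuming a hypothetical layout with two member crates, `crates/foo` and `crates/bar`):

```dockerfile
# Copy every manifest, manually recreating the workspace directory structure
COPY Cargo.toml Cargo.lock ./
COPY crates/foo/Cargo.toml crates/foo/Cargo.toml
COPY crates/bar/Cargo.toml crates/bar/Cargo.toml
# Create dummy entrypoints so that Cargo agrees to build the (dependency-only) project
RUN mkdir -p crates/foo/src crates/bar/src && \
    touch crates/foo/src/lib.rs crates/bar/src/lib.rs
# Builds only the dependencies, since the local crates are empty at this point
RUN cargo build
```

Every additional member crate means another manually maintained `COPY` line (and dummy file) in the Dockerfile.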
Many people try to use this workaround (according to the original issue), with varying degrees of success, but it's clear that this approach is not reliable enough and it's also quite clumsy, as you need to copy the manifest/config files and also create the dummy files manually. The existence and "popularity" of the original issue should serve as a proof that this workaround is simply not enough for Rust users.
## Using Docker cache mounts
Docker has a feature that allows `RUN` layers to use a cache that is persistent across different builds. This allows us to reuse the Cargo index and downloaded dependencies (in `$CARGO_HOME`) and also the built dependencies (in the `target` directory). This approach was described by @masonk [here](https://github.com/rust-lang/cargo/issues/2644#issuecomment-570749508).
Here is an example:
```dockerfile
FROM rust:1.68.1
WORKDIR /app
# Copy all files of the project
COPY Cargo.toml .
COPY crates crates
# Perform cargo build with cached Cargo and target directories
RUN --mount=type=cache,target=/usr/local/cargo,from=rust:1.68.1,source=/usr/local/cargo \
    --mount=type=cache,target=target \
    cargo build
```
To execute this build, you may have to run the build with `DOCKER_BUILDKIT=1 docker build ...`.
### Advantages
- The Dockerfile structure is quite straightforward: we can just copy the manifests and source files into a Docker layer as we normally would, and then perform `cargo build` with an added incantation (which is, arguably, not that simple).
### Disadvantages
- Docker Buildkit (an alternative Docker backend) is required for this caching to be available. It's currently the default backend on Linux, but only in very recent Docker versions (in older ones it can be enabled simply with an environment variable) and it's not yet available on Windows/macOS (apparently it can be used on Windows with Docker WSL backend). The fact that a non-default build backend has to be used also makes this less discoverable. It also might require some special setup to work on CI (e.g. https://github.com/docker/setup-buildx-action for Github Actions).
- The path to the Cargo home directory has to be specified explicitly (it can differ, based on your base Docker image and how you set up Rust/Cargo). If you use `${CARGO_HOME}` for specifying the path, the build cache will not be reused. This is still an issue after [several years](https://github.com/rust-lang/cargo/issues/2644#issuecomment-570749508).
- The path to the Cargo home directory has to be repeated in the `from=` section, otherwise the cache mount will simply truncate the Cargo directory on the first run and therefore the `cargo` binary will not be available. Actually, I'm not even sure how to do this if you install Cargo e.g. using `rustup` in your Dockerfile, as opposed to using a Rust base image from which you can copy the original Cargo directory. It could be done in a similar way with a multi-stage Docker build, but I don't know how to do it in a "one-stage" build (If anyone knows how to do this, please let me know!).
- Build isolation is reduced, because outside files/directories are being mounted into the image being built. I think that the mounted cache can actually be shared between different images being built, which can further break isolation.
- Using an outside source for the `target` directory, which is reused across multiple builds and where it's not clear how exactly it is isolated, might produce compilation errors or miscompilations if `rustc` (re)uses the wrong artifacts.
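One additional detail to keep in mind: files written into a cache mount are not committed to the resulting image layer, so if the final image should contain the compiled binary, it has to be copied out of the `target` cache mount within the same `RUN` command (a sketch, assuming the binary is called `app`):

```dockerfile
# `target` lives only in the cache mount, so the binary has to be copied
# into the image itself within the same RUN command
RUN --mount=type=cache,target=/usr/local/cargo,from=rust:1.68.1,source=/usr/local/cargo \
    --mount=type=cache,target=target \
    cargo build --release && \
    cp target/release/app /usr/local/bin/app
```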
## Using `cargo chef`
[`cargo chef`](https://crates.io/crates/cargo-chef) is a third-party Cargo command that tries to solve the Docker caching issue by first computing a build plan that describes what dependencies should be built and with which compiler flags, and then uses the build plan to build the dependencies in a separate Docker layer.
Example:
```dockerfile
FROM rust:1.68.1 as planner
RUN cargo install cargo-chef
WORKDIR /app
# Copy the whole project
COPY . .
# Prepare a build plan ("recipe")
RUN cargo chef prepare --recipe-path recipe.json
FROM rust:1.68.1 AS builder
RUN cargo install cargo-chef
# Copy the build plan from the previous Docker stage
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this layer is cached as long as `recipe.json`
# doesn't change.
RUN cargo chef cook --recipe-path recipe.json
# Build the whole project
COPY . .
RUN cargo build
```
`cargo-chef` leverages Docker multi-stage builds and how they interact with layer caching. As long as the `recipe.json` file, which is generated in the `planner` stage, has the same content (i.e. no compiler flags or dependencies were changed), then `cargo chef cook`, which compiles the dependencies, will not be re-executed.
[This article](https://www.lpalmieri.com/posts/fast-rust-docker-builds/) describes the ideas behind `cargo chef`.
### Advantages
- The build plan is generated from the whole project, which is a big advantage: the dependency list and compiler flags are derived from the whole project by `cargo` automatically, therefore everything gets taken into account by default (all `Cargo.toml` files, `config.toml` files etc.). You do not have to manually copy manifests or reconstruct their directory layout, and you do not have to create dummy files.
### Disadvantages
- It sometimes lags behind Cargo in supported features
- There are corner cases that it doesn't quite get right
- It is an external tool that has to be installed in the Dockerfile (or provided through the base Docker image). Additionally, the tool has to be installed in both Docker stages (both in `planner` and in `builder` above), but this can be solved by creating an initial stage (or using a base Docker image) where the tool is installed once and then shared by the later stages (see the sketch after this list).
- It requires the use of a two-stage build in the Dockerfile. While this is (arguably) not that complex and mostly the same template code can be copy-pasted from the README for each project, not all Docker users might be familiar with multi-stage builds.
- On the implementation side, a lot of effort has to be put into maintaining code that mirrors Cargo's behaviour
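To avoid installing the tool twice, the installation can be moved into a shared base stage (a sketch, reusing the `rust:1.68.1` image from the example above):

```dockerfile
# Shared base stage with cargo-chef installed once
FROM rust:1.68.1 AS chef
RUN cargo install cargo-chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - cached as long as recipe.json doesn't change
RUN cargo chef cook --recipe-path recipe.json
COPY . .
RUN cargo build
```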
## Using `sccache` to cache builds
It is possible to use an external caching system, such as `sccache`, to make repeated compilation much faster (instead of avoiding it altogether).
An example of this approach was provided by @AndiDog [here](https://github.com/rust-lang/cargo/issues/2644#issuecomment-570803246). I won't copy it here, because it's quite complex (even though it could probably be simplified).
### Advantages
- `sccache` will speed up repeated builds of all files and crates, even the local crates that change often. This can be important for very large Rust projects.
### Disadvantages
- Users need to set up an `sccache` server running outside of the Docker image being built. This might not be trivial.
- Users need to install `sccache` in the Docker image and also configure Cargo in the Dockerfile so that it uses the `sccache` `rustc` wrapper (see the sketch below). This complicates the Dockerfile in a non-trivial way.
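For reference, the Cargo-facing part of the setup is relatively small (a minimal sketch; the cache backend itself, e.g. a shared server or remote storage, still has to be configured separately via `sccache`'s own settings):

```dockerfile
FROM rust:1.68.1
# Install sccache and tell Cargo to invoke rustc through it
RUN cargo install sccache
ENV RUSTC_WRAPPER=sccache
WORKDIR /app
COPY . .
# Every rustc invocation now goes through sccache and can hit the external cache
RUN cargo build --release
```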
While this might be a viable approach for complex use-cases and potentially very large projects that e.g. use some custom build farms, I think that it's just too complex (and too big of a hammer) for many (most?) Rust projects.
## Using `cargo-build-dependencies`
[`cargo-build-dependencies`](https://github.com/0xmad/cargo-build-dependencies) is a third-party Cargo subcommand that parses `Cargo.toml` and `Cargo.lock` and then compiles all the dependencies found in it using `cargo build -p`.
The original version of the crate was called [`cargo-build-deps`](https://crates.io/crates/cargo-build-deps), however it is not maintained and crashes with new Cargo projects.
Example usage:
```dockerfile
RUN cargo install cargo-build-dependencies
# Copy manifest
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && touch src/lib.rs
RUN cargo build-dependencies
COPY src src
RUN cargo build
```
### Advantages
- I couldn't find any. For the Docker caching use-case, it behaves pretty much identically to a plain `cargo build`. Actually in local testing, `cargo build-dependencies` was worse than `cargo build`, because the dependencies were rebuilt with the former command in the second step for some reason.
### Disadvantages
- Users still have to reconstruct the original `Cargo.toml` layout of the crate/workspace. They also need to copy the `Cargo.lock` file (the tool crashes without it, which might be problematic for libraries without a lockfile).
- Users still have to create dummy `lib.rs` files.
- Doesn't really support workspace projects.
- Doesn't handle `config.toml` files.
- It is an external tool that has to be installed in the Dockerfile.
## Using `cargo-wharf`
`cargo-wharf` is a custom Docker "frontend" for building Cargo projects. It requires the BuildKit Docker backend (same as Docker cache mounts) and requires the user to basically reimplement the Dockerfile with a custom metadata specification embedded into `Cargo.toml`.
While it's a cool idea, I think that this strays too much from the normal Docker workflow and would be too alien(ating) to most users. It also looks like the tool has not been maintained for several years.
# Attempted solutions in Cargo
There have been several PRs to Cargo that have tried to solve the Docker layer caching issue, usually by providing some command that builds only dependencies.
- [#3567](https://github.com/rust-lang/cargo/pull/3567), [#8041](https://github.com/rust-lang/cargo/pull/8041)
- These PRs provide a command that skips the compilation of the local crate(s) and only compiles the dependencies (e.g. only the "remote" crates from a registry).
- I think that it has been proven by `cargo build-dependencies` above that this approach does not work and does not actually improve the Docker layer caching situation in any way. Users still have to copy `Cargo.toml` files manually and create dummy files. It's basically the same as just running `cargo build`.
- [#8061](https://github.com/rust-lang/cargo/pull/8061)
- This is basically the same approach as above, although it also makes a distinction between local and remote dependencies, and tries to handle edge cases like dependency patch sections. But ultimately it also does not solve Docker layer caching.
- [@wyfo's work](https://github.com/rust-lang/cargo/commit/a007e7981a72fff4745c46c75818beaa63ad0308)
- A more recent implementation of a similar idea of providing a "build dependencies only" command. It proposes an idea for avoiding creating the dummy files by forcing the user to explicitly specify crate entrypoints in `Cargo.toml` (e.g. the `[[bin]]` section) and/or by patching Cargo to simply ignore the expected entrypoints in the "build dependencies only" mode.
- This approach still requires manually copying `Cargo.toml` and `config.toml` files.
# How do other languages do it? (prior art)
There are several popular languages with an integrated build system similar to Cargo, for which downloading/compiling dependencies in Docker/CI builds also takes a lot of time. Below is a non-exhaustive list of such technologies along with a brief description of the simplest way to solve the Docker layer caching issue with each of them.
> Note: Obviously there are many ways to build projects and dependencies in these technologies, I tried to select the most popular/common ones that I know of.
I'll try to classify the individual technologies by the following attributes:
- Complexity: how hard is it to write a Dockerfile with proper layer caching for dependencies?
- Built-in: is everything required for layer caching built-in into the default package manager, or is an external tool required?
- Multi-stage: is a multi-stage Docker build required for the caching to work?
## `Node.js`
Dependencies of `npm` (Node Package Manager) projects are specified in a `package.json` file:
```json
{
"name": "foo",
"dependencies": {
"typescript": "^5.0.2"
}
}
```
When writing a Dockerfile, users typically first copy the `package.json` file into a Docker layer and then execute `npm install` to download dependencies into the `node_modules` directory. After that they copy the rest of the project and run e.g. some build/compilation/packaging step.
```dockerfile
# Copy dependency list
COPY package.json .
# Install dependencies
RUN npm install
# Copy the rest of the project (source files)
COPY src src
# Perform some build step
RUN npm run build
```
There is also a dedicated command `npm ci`, which works with a lockfile (`package-lock.json`) instead of the manifest file. It has dedicated functionality designed for CI environments (like freezing dependencies), but it doesn't help with Docker caching in any special way - that works out of the box already by simply copying `package.json` and running `npm install`.
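The lockfile variant looks almost identical (a sketch, assuming a `build` script is defined in `package.json`):

```dockerfile
# Copy the manifest and the lockfile
COPY package.json package-lock.json ./
# Install exactly the dependency versions pinned in the lockfile
RUN npm ci
# Copy the rest of the project (source files)
COPY src src
# Perform some build step
RUN npm run build
```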
> Note: `package.json` can contain dependencies stored at local paths, I'm not sure how this is resolved for Docker. It's also probably not that common (?).
**Complexity**: Easy (copy the manifest, run a single command).
**External/built-in**: Built-in, run `npm install` or `npm ci`.
**Multi-stage**: No.
## `Python`
Python has a very complicated build system story, with many competing alternatives (`pip` + `requirements.txt`, `pip-tools`, `Poetry`, `Pipenv`, `conda`, `pdm`, etc.). However, in most (all?) of these approaches, there is a single file (plus maybe an additional lockfile) that describes the dependencies of the project. Similar to `npm`, this makes it easy to copy this file into a separate Docker layer and download the dependencies, without also preparing/packaging the source files themselves.
Example Dockerfile:
```dockerfile
# Copy dependency list
COPY requirements.txt .
# Install dependencies
RUN python3 -m pip install -r requirements.txt
# Copy the rest of the project (source files)
COPY src src
# Perform some additional steps (optional)
RUN python3 setup.py install
...
```
> Note: For Python projects, it's probably not that common to perform an additional build step. It's still quite useful to cache the dependencies, so that you don't have to wait e.g. several minutes for all your `pip` dependencies to be downloaded again after you make a tiny change to the source code.
**Complexity**: Easy (copy the manifest, run a single command).
**External/built-in**: Built-in if you consider `pip` to be a ubiquitous tool. You have to install `pip` or the used build system in the Dockerfile.
**Multi-stage**: No.
## `Ruby`
The story is again quite similar to `Node.js` and `Python`. There is a manifest file called `Gemfile`, which specifies dependencies, and also a lockfile called `Gemfile.lock`. The `bundler` build system can then install dependencies based on these two files.
```dockerfile
# Copy manifest and lockfile
COPY Gemfile Gemfile.lock ./
# Install dependencies
RUN bundle install --deployment
# Copy the rest of the project (source files)
COPY . .
```
**Complexity**: Easy (copy the manifest/lockfile, run a single command).
**External/built-in**: Built-in if you consider `bundler` to be a ubiquitous tool. You have to install `bundler` in the Dockerfile.
**Multi-stage**: No.
## `Go`
> Disclaimer: I'm not familiar with `Go`, so the description below might not be completely accurate.
`Go` defines external dependencies directly by importing them in the source files, which could complicate finding the dependencies for Docker layer caching (?). That being said, there is a dedicated manifest file called `go.mod` that describes the used dependencies (together with a lockfile called `go.sum`). With this approach, downloading dependencies should work similarly to `npm` or Python:
```dockerfile
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -v -o server
```
Alternatively, [this guide](https://www.docker.com/blog/containerize-your-go-developer-environment-part-2/) proposes using Docker cache mounts, which seems to be a quite universal solution when there is no built-in support from the build system.
Regardless of the approach taken here, `Go` is designed to be very fast to compile, therefore I assume that the repeated rebuild problem is much less relevant for `Go` than for `Rust`.
## `Elixir`
> Again, I'm not very familiar with `Elixir`, so if anyone finds this inaccurate, feel free to post a comment.
With the `mix` build system, `Elixir` programs can describe dependencies within a `mix.exs` file (together with an accompanying lockfile called `mix.lock`). `mix` then has a dedicated command for building dependencies only, which is called `mix deps.compile`.
Example:
```dockerfile
# Copy manifest, lockfile, and config file
COPY mix.exs mix.lock ./
COPY config config
# Download and compile dependencies
RUN mix deps.get
RUN mix deps.compile
# Copy the rest of the project and perform other operations...
```
## C and C++
The build tools described above are mostly for "scripting" languages, which are quite far from Rust. It would be fair and useful to also compare Cargo to C/C++ build systems.
However, there is (infamously) no universal build system for C/C++. There are tools like `conan` or `vcpkg`, but I haven't found any mentions of Docker dependency caching for them (I'd be glad for any input if someone else has experience with this).
There's support in `CMake` for downloading (simple-ish) dependencies during the "configure" step, but I'm not sure if people actually do that in a separate Docker layer.
I *think* that for most C/C++ projects, Docker dependency caching is "solved" by either installing the dependencies with a system package manager (e.g. `apt`) in a separate Docker layer, or just by downloading and building the dependencies manually in the Dockerfile, which also happens in a separate layer, and should thus be cached by default.
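As a sketch of the first approach (the package names and the CMake invocation are illustrative, not taken from any particular project):

```dockerfile
FROM debian:bullseye
# Install the toolchain and dependency packages in one cached layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential cmake libssl-dev zlib1g-dev && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Only these layers are invalidated when the source code changes
COPY . .
RUN cmake -B build -S . && cmake --build build
```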
# What makes Cargo different from other build tools?
The pattern above is clear - other build tools can usually make Docker layer caching work quite easily by first copying a manifest/lockfile and then running a command that downloads/compiles the dependencies. This creates a cached layer, which is not invalidated by source code file changes.
There are several properties of Cargo that make Docker layer caching more complicated than it is for other tools.
- Multiple `Cargo.toml` files
- There can be arbitrarily many manifest files that describe dependencies, nested within an arbitrary directory structure. When using Docker, users have to manually copy all these files to the dependency Docker layer, and also reconstruct their exact directory structure.
- In most other described build systems, there is a single root manifest file containing all the dependencies, so it's enough to copy a single file (+ a lockfile) to the Docker image.
- Compiler flags, features and targets
- Cargo needs to know the specific compiler (`rustc`) flags, enabled features and also the target platform (e.g. `Linux GNU x64`) when compiling dependencies. And these properties need to be the same when compiling dependencies and the local crates, in order to reuse the Cargo build cache. This is also one of the reasons why we cannot just compile the dependencies from `Cargo.lock`, because it only says what dependencies are there, but not how they should be compiled.
- Compiler flags can be specified in `config.toml` files (which are arguably not that common), but more importantly also in `Cargo.toml` files (e.g. `[profile.dev]`)! This means that we have to find and copy all these files that can modify compiler flags to the dependency Docker layer.
- Entrypoint files
- Cargo requires some entrypoint file (e.g. `lib.rs` or `main.rs`) to exist on the filesystem before it even attempts to compile anything (including dependencies). There can also be other files referenced by `Cargo.toml`, such as additional binaries or benchmarks.
- This means that users have to create dummy files in the dependency Docker layer, which is annoying, and can lead to `mtime` invalidation problems.
I think that Cargo is a bit unique in this area, because it combines liberal usage of dependencies (because it's so simple to add them) with the complex compilation model of native code that uses compiler flags. This makes it more challenging to perform a cached "dependency only" build, and combined with the fact that Rust crates can be slow to compile, this creates a large pain point for users.
# Solutions
## Existing Solutions
> This part is purely my personal opinion and contains speculative and possibly controversial opinions :)
**To resolve the dependency Docker layer caching problem, we need to be able to easily create a Docker layer that has enough information to build all dependencies (with the correct compiler flags) in a way that doesn't depend on source files (so that changes to source files won't invalidate the layer).**
I think that any approach that requires the user to reconstruct the `Cargo.toml` directory structure and potentially also create the dummy files is just not ergonomic enough and will stop working sooner or later in complex workspaces and when edge cases are encountered. It is not scalable in the sense that users might forget about sources of dependencies or compiler flags (e.g. `config.toml`), and when the dependency sources change (e.g. a new `Cargo.toml` is added), the user has to modify the Dockerfile.
I see two potential solutions that seem to work well and aren't unnecessarily complex (unlike e.g. using `sccache`):
1) Use the approach of `cargo-chef`
This does not necessarily mean that Cargo should directly integrate this third-party tool. However, with the following two features:
- Generate a build plan from a Cargo project.
- Perform a build of a Cargo project based on a build plan, while having the option to only compile its dependencies.
Cargo should be able to provide similar functionality that would resolve the Docker layer caching issue. At the same time, these features might be general enough that they could also be useful for other use-cases (some forms of a build plan were discussed [before](https://github.com/rust-lang/cargo/issues/5579)). I *think* that Cargo has some support for generating build plans/unit graphs, but I'm not sure how well that is supported in its current state.
This approach would take the responsibility of specifying or copying the list of dependencies and compiler flags from the user to Cargo.
It would also provide a generic solution - pretty much the same few lines of code could be copy-pasted to a Dockerfile for almost any project and they should "just work" with Docker layer caching. I think that this is a big benefit.
There is some complexity in multi-stage builds, but these are very useful for Rust anyway: crates tend to produce statically linked binaries that can simply be copied to a minimal final stage, which reduces the size of the final image. Multi-stage builds are thus quite useful for Rust Docker containers in general, even without taking layer caching into account.
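A minimal sketch of such a multi-stage Dockerfile (assuming the crate produces a binary called `app`; a fully static build, e.g. with the musl target, could even use an empty `scratch` final stage):

```dockerfile
FROM rust:1.68.1 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Small runtime stage that only contains the compiled binary
FROM debian:bullseye-slim
COPY --from=builder /app/target/release/app /usr/local/bin/app
CMD ["app"]
```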
Why not just use `cargo chef`? As noted above, having to install a third-party tool makes this approach less discoverable, complicates the Dockerfile slightly, and I think that it can also make some users reluctant to use this feature, if it's not "official". Integrating this functionality into Cargo could also help iron out some edge cases.
There are multiple ways in which this "integration" could be performed. One approach suggested by Luca Palmieri (the author of `cargo chef`) would be e.g. to provide such a Cargo command that would be maintained in-tree (to make it easier to develop/maintain) and possibly also integrated into the official Rust Docker image (to make it more available to end users); however, it would not necessarily have to be a part of Cargo itself.
This approach was described in [this](https://github.com/rust-lang/cargo/issues/8362) issue.
2) Use Docker cache mounts
- This has a big advantage that we wouldn't need to modify Cargo in any way, and users could design their Dockerfile in a quite straightforward fashion, just with an added flag when building their project. It also doesn't require a multi-stage build.
- However, this approach also has some quirks, as described above. It uses a Docker build backend which is non-default (Linux), not implemented yet (Windows/macOS) or has to be installed/prepared (CI). It also cannot just be copy-pasted into any Dockerfile; the command currently has to be modified based on the location of the Cargo home directory. I'm also unsure how to use it in Dockerfiles where Cargo is manually installed, without creating a multi-stage build manually.
## Solution Brainstorming
Considerations
- For a feature to be stabilized in cargo, it needs to fit into the cohesive whole, meaning it needs to work without a lot of caveats. It can't be a second-tier solution
- Cargo needs to maintain compatibility for stabilized features. By merging a feature, we need to make sure the design is solid enough to work now and be open to handling future cases. As part of this, we need to be careful about which technologies cargo has direct integration with.
- We need to be cognizant of the maintenance burden this puts on the cargo team
- Cargo is already weighed down with a lot of non-obvious workflows that make it difficult to safely make changes and design features. We need to be cautious about adding new, one-off, specialized workflows within cargo
- Potential for influx of bugs
- Potential for people asking for extending the feature for their individual needs
### Build Plans
**TODO:** Finish fleshing this out
Cargo has an unstable [build plan feature](https://doc.rust-lang.org/cargo/reference/unstable.html#build-plan) that reports what commands would be run.
Disadvantages
- This is one of those non-obvious workflows in cargo that are brittle to support
- The caller has to deal with the recursive nature of the build plan, because not all of the information is known a priori, so some commands might extend the build plan further
### An "--allow-missing-targets" flag
**TODO:** Finish fleshing this out
This would avoid the need for creating dummy `.rs` files
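A sketch of how the two-step workaround from above might then shrink (the flag name and behaviour are hypothetical, taken from this brainstorm):

```dockerfile
# Copy the manifests (this still has to be done manually)
COPY Cargo.toml Cargo.lock ./
# Hypothetical flag: build what can be built (the dependencies) even though
# the entrypoint source files are not present yet, without creating dummy files
RUN cargo build --allow-missing-targets
# Copy the sources and do the real build
COPY src src
RUN cargo build
```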
Disadvantages
- This has the potential for being a non-obvious workflow in cargo that could cause problems
- The manifests still need to be copied over
### Extend `Cargo.lock` to allow building without manifests
**TODO:** Finish fleshing this out
Inspiration: `cargo install` allows you to build an arbitrary package today. Could we do the same to allow users to create their own `--dependencies-only`, especially without the target source being present?
Some build systems need very little context for their "dependency build" Docker layer, just the list of packages to install. For cargo, it's a bit more complicated and not all of the information gets recorded in `Cargo.lock`. `Cargo.lock` records the dependency tree but leaves out
- Profile information
- Dependency types (normal, build, dev, and target-platform cfgs)
- Feature flags
If the format were extended to include this information, a user could run:
```console
$ cargo build --manifest-path Cargo.lock -p <some-dep>
```
A third-party `cargo build-dependencies` command could present an interface like `cargo build`, allowing features to be activated, etc., and then use that to determine which `cargo build --manifest-path Cargo.lock` command to run, i.e. with which top-level packages and which features.
Disadvantages
- This would likely increase the risk of merge conflicts in the lock file, something the cargo team has previously worked to reduce
- This would further bloat the lock file
- This requires a third-party command to re-implement the `Cargo.lock` tree walking and feature activation code, including feature unification