cargo build --dependencies-only #2644

Open
nagisa opened this issue May 4, 2016 · 357 comments
Labels
C-feature-request Category: proposal for a feature. Before PR, ping rust-lang/cargo if this is not `Feature accepted` S-needs-design Status: Needs someone to work further on the design for the feature or fix. NOT YET accepted.

Comments

@nagisa
Member

nagisa commented May 4, 2016

cargo team notes:

This is primarily targeted at docker layer caching for dependencies. For limiting docker layer caching of workspace members, see #14566.


There should be an option to only build dependencies.

@alexcrichton added the A-configuration (Area: cargo config files and env vars) label on May 4, 2016
@KalitaAlexey
Contributor

@nagisa,
Why do you want it?

@nagisa
Member Author

nagisa commented Jan 17, 2017

I do not remember exactly why, but I do remember that I ended up just running rustc manually.

@KalitaAlexey
Contributor

@posborne, @mcarton, @devyn,
You reacted with thumbs up.
Why do you want it?

@mcarton
Member

mcarton commented Jan 17, 2017

Sometimes you add a bunch of dependencies to your project, know the next cargo build will take a while, and want your computer to get started on that while you begin coding, so the next cargo build is actually fast.
But I guess I got here searching for a cargo doc --dependencies-only, which would let you get the docs for your dependencies while your project does not compile, because you need the docs to know exactly how to fix that compilation error you've had for half an hour 😄
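
(For what it's worth, cargo doc already has the inverse flag today; it's the deps-only direction that's missing:)

cargo doc           # documents the workspace crates and all dependencies
cargo doc --no-deps # existing inverse: document only the workspace crates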

@gregwebs

As described in #3615, this is useful in a build to set up a cache of all dependencies.

@alexcrichton
Member

@gregwebs out of curiosity, do you want to cache compiled dependencies or just downloaded dependencies? Caching compiled dependencies isn't implemented today (but would be with a command such as this), while downloading dependencies is available via cargo fetch.
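
For the download-only half, the corresponding Docker layer already works today, e.g.:

# Download (but do not compile) everything pinned by the lockfile.
COPY Cargo.toml Cargo.lock ./
RUN cargo fetch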

@gregwebs

gregwebs commented Jan 31, 2017

Generally, as with my caching use case, the dependencies change infrequently, so it makes sense to cache their compilation.

The Haskell tool stack went through all this, and they generally decided to merge things into a single command where possible. For fetch they did end up with something kind of confusing: build --dry-run --prefetch. For the build --dependencies-only mentioned here they have a direct equivalent: build --only-dependencies

@alexcrichton
Member

@gregwebs ok thanks for the info!

@KalitaAlexey
Contributor

@alexcrichton,
It looks like I should continue my work on the PR.
Will the Cargo team accept it?

@alexcrichton
Member

@KalitaAlexey I personally wouldn't be convinced just yet, but it'd be good to canvass opinions from others on @rust-lang/tools as well.

@KalitaAlexey
Contributor

@alexcrichton,
Anyway, I have no time right now.

@nrc
Member

nrc commented Feb 2, 2017

I don't see much of a use case: you can just do cargo build and ignore the output for the last crate. If you really need to do this (for efficiency) then there is an API you can use.

@gregwebs

gregwebs commented Feb 4, 2017

What's the API?

@nrc
Member

nrc commented Feb 6, 2017

Implement an Executor. That lets you intercept every call to rustc, and you can do nothing if it is the last crate.

@gregwebs

gregwebs commented Feb 6, 2017

I wasn't able to find any information about an Executor for cargo. Do you have any links to documentation?

@nrc
Member

nrc commented Feb 6, 2017

Docs are a little thin, but start here:

/// A glorified callback for executing calls to rustc. Rather than calling rustc
/// directly, we'll use an Executor, giving clients an opportunity to intercept
/// the build calls.

You can look at the RLS for an example of how to use them: https://github.com/rust-lang-nursery/rls/blob/master/src/build.rs#L288
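
To make the idea concrete, a rough sketch against the 2017-era cargo API (the Executor trait has since moved to cargo::core::compiler and its exec signature has grown extra parameters, so treat this as illustrative rather than drop-in):

use cargo::core::PackageId;
use cargo::ops::Executor;
use cargo::util::{CargoResult, ProcessBuilder};

// Runs rustc for every crate except the top-level one.
struct SkipPrimary {
    primary: PackageId, // id of the workspace's own crate
}

impl Executor for SkipPrimary {
    fn exec(&self, cmd: ProcessBuilder, id: &PackageId) -> CargoResult<()> {
        if *id == self.primary {
            return Ok(()); // do nothing for the primary crate
        }
        cmd.exec()?; // otherwise invoke rustc as cargo normally would
        Ok(())
    }
}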

@shepmaster
Member

A question on Stack Overflow wanted this feature. In that case, the OP wanted to build the dependencies for a Docker layer.

A similar situation exists for the playground, where I compile all the crates once. In my case, I just put in a dummy lib.rs / main.rs. All the dependencies are built, and the real code is added later.

@alexcrichton
Member

@shepmaster unfortunately the proposed solution wouldn't satisfy that question because a Cargo.toml won't parse without associated files in src (e.g. src/lib.rs, etc). So that question would still require "dummy files", in which case it wouldn't specifically be serviced by this change.

@lolgesten

lolgesten commented Oct 9, 2017

I ended up here because I also am thinking about the Docker case. To do a good docker build I want to:

COPY Cargo.toml Cargo.lock /mything

RUN cargo build-deps --release  # creates a layer that is cached

COPY src /mything/src

RUN cargo build --release       # only rebuild this when src files changes

This means the dependencies would be cached between docker builds as long as Cargo.toml and Cargo.lock don't change.

I understand src/lib.rs / src/main.rs are needed to do a proper build, but maybe build-deps could simply build all the deps.

@ghost

ghost commented Oct 9, 2017

The Dockerfile template in shepmaster's linked Stack Overflow post above SOLVES this problem.

I came to this thread because I also wanted the docker image to be cached after building the dependencies. After later resolving this myself, I posted an explanation of docker caching and was informed that the answer was already linked in the Stack Overflow post. I made this mistake, someone else made this mistake; it's time to clarify.

RUN cd / && \
    cargo new playground
WORKDIR /playground                      # a new project has a src/main.rs file

ADD Cargo.toml /playground/Cargo.toml 
RUN cargo build                          # DEPENDENCIES ARE BUILT AND CACHED
RUN cargo build --release
RUN rm src/*.rs                          # delete dummy src files

# here you add your project src to the docker image

After building, changing only the source and rebuilding starts from the cached image with dependencies already built.

@lolgesten

someone needs to relax...

@lolgesten

Also @KarlFish, what you're proposing does not actually work, at least with FROM rust:1.20.0:

  1. cargo new playground fails because it wants the USER env variable to be set.
  2. RUN cargo build builds the dependencies for debug, not release. Why do you need that?

@lolgesten

lolgesten commented Oct 9, 2017

Here's a better version.

FROM rust:1.20.0

WORKDIR /usr/src

# Create blank project
RUN USER=root cargo new umar

# We want dependencies cached, so copy those first.
COPY Cargo.toml Cargo.lock /usr/src/umar/

WORKDIR /usr/src/umar

# This is a dummy build to get the dependencies cached.
RUN cargo build --release

# Now copy in the rest of the sources
COPY src /usr/src/umar/src/

# This is the actual build.
RUN cargo build --release \
    && mv target/release/umar /bin \
    && rm -rf /usr/src/umar

WORKDIR /

EXPOSE 3000

CMD ["/bin/umar"]

@shepmaster
Member

You can always review the complete Dockerfile for the playground.

@maelvls

maelvls commented Nov 10, 2017

Hi!
What is the current state of the --deps-only idea? (mainly for dockerization)

@AdrienneCohea

I agree that it would be really cool to have a --deps-only option so that we could cache our filesystem layers better in Docker.

I haven't tried replicating this yet, but it looks very promising. This uses glibc and not musl, by the way. My main priority is to get to a build that doesn't take 3-5 minutes every time, not a 5 MB alpine-based image.

@SamuelMarks

613th up-vote

(now that's a famous number)

Would be great to see this. Will give cargo-chef a shot in the meantime.
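
For anyone landing here before trying it, the cargo-chef flow from its README looks roughly like this (image tag and paths are illustrative):

FROM lukemathwalker/cargo-chef:latest-rust-1 AS chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Builds dependencies only; cached as long as recipe.json is unchanged.
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release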

@lionkor

lionkor commented Aug 21, 2024

Hey, could we PLEASE get this?

I tried

  • cargo-chef, but the official cargo-chef docker image has the wrong glibc version which breaks my builds
  • building a new (empty) project with all dependencies, and then loading in the source files, and then building, but docker does not always correctly detect that the files have changed, and I end up deploying hello-world into production
  • caching relevant directories via RUN's --mount=type=cache, but that somehow fails to cache some dependency downloads, while still caching all binary objects. That took way too long to configure and ends up like this:
FROM rust:bullseye AS builder

RUN rustup toolchain install nightly

WORKDIR /app

COPY Cargo.toml Cargo.lock rust-toolchain.toml /app/
COPY . /app/

RUN --mount=type=cache,target=/root/.cargo \
    --mount=type=cache,target=/root/.rustup \
    --mount=type=cache,target=/app/target/release/build \
    --mount=type=cache,target=/app/target/release/deps \
    --mount=type=cache,target=/app/target/release/.fingerprint \
    cargo build --release

FROM debian:bullseye-slim

WORKDIR /app

COPY --from=builder /app/target/release/my-app .

ENTRYPOINT [ "./my-app" ]

I would like to be able to just have one line that says "build all my dependencies", and then another line that builds the rest. This would leave docker caching the first one (which hasn't changed) and rebuilding the second. I can then use --mount=type=cache there to cache my project's own artifacts too, and I'll be happy.

Would be super cool :)

@epage
Contributor

epage commented Aug 21, 2024

@lionkor please see the summary

@sdavids

sdavids commented Aug 21, 2024

[rant]

This issue has 650 upvotes, 333 comments, and 119 participants.

It is not even on the roadmap under "Big Projects, no backers".

This issue is 8.5 years old.

The cited summary is almost 1.5 years old.

During these years thousands upon thousands of CI hours have been wasted.

The environmental footprint of this waste is enormous (both in compute hours and network traffic).

CI providers are happy because their pockets get filled, yay!

Just one example: Microsoft Sustainability vs. GitHub Actions Pricing Calculator

[/rant]


[rant]

Maybe we can start with a Pareto version:

  • single Cargo.toml and no other shenanigans: tada, it just works™
  • everything else: print "wait another 8.5 years"

[/rant]


So please,

put it on the roadmap and find a corporate backer—saving millions of compute hours would be a nice PR win and very good for the environment.

Thanks.

@vn971

vn971 commented Aug 21, 2024

@epage I've read the document at the top of this issue (and the one you've quoted in your last comment), and I have a question.

Currently, if I understand it correctly, one of the disadvantages of the external cargo-chef approach is that it's, well, external, and that it requires maintenance (work) to keep up to date with pure cargo.

With this in mind, can you please help me analyze the following alternative?
In a multi-stage build like the one described for cargo-chef, could we use the following three stages?

  1. Stage 1, copy everything and run a script that generates the file tree with only Cargo and config.toml files
  2. Stage 2, compile all the dependencies using that file tree.
  3. Stage 3, build the actual project using pre-compiled dependencies.

If this approach is feasible, cargo (or an external tool at first) could get an additional flag that would allow it to "export"/"copy" all of its required "build plan dependencies" into a separate directory. This could then be executed in the proposed Stage 1, and later stages 2 and 3 could be more or less standard.

@epage
Contributor

epage commented Aug 21, 2024

So far we've mostly been punting this to third-party solutions, like cargo-chef, to explore this design space. If there is a problem with those tools (e.g. wrong glibc version) or you have an idea on how to improve it (like shuffling around the stages), then the place to start is with cargo-chef. If you have another design direction to explore, feel welcome to create your own third-party tool; it would be great to see what we can learn from that!

Please keep in mind:

  1. Our compatibility guarantees mean that if we provide a short-term solution, we'd be stuck with it forever. We'd either need to be confident enough in our future direction to know we can extend it that way without breaking people, or we'd need to be fine with a dead-end solution. To be fine with a dead-end solution, we'd need to weigh the internal design constraints (i.e. are we "boxing ourselves in") and other factors.
  2. We are not employed by you and choose our own priorities. Upvotes and comments are not how we make decisions. If this is important to you, then consider whether you can drive an effort like this.

@Kobzol
Contributor

Kobzol commented Aug 21, 2024

@lionkor If you have an issue with glibc in cargo-chef, you should be able to simply use a different base image with a different glibc and then install cargo-chef in your Dockerfile manually. Have you tried that?

Btw, for clarity, I would like to reiterate that cargo build --dependencies-only would by itself NOT help with the Docker caching issue, because the core of the issue is the complex nature of Cargo projects and their interaction with Docker layers. And as has been stated above, we cannot really offer half-measures in Cargo, because they need to remain stable forever. Again, you can find the explanation of the various challenges and trade-offs in the summary; note that there are also other valid solutions than just using cargo-chef, e.g. Docker cache mounts.

That doesn't mean that this problem is unsolvable, just that it's way more difficult than just implementing cargo build --dependencies-only. I think that it's likely that in order to solve this in Cargo, we'd essentially need to implement some version of a "cargo docker" command, which might be difficult to justify.

@lionkor

lionkor commented Aug 22, 2024

Yes, one solution is to make my own image with cargo-chef in it, which matches the rust, cargo and glibc versions I need exactly, and to spend yet another few hours making sure that works reliably, on top of all the time already spent trying different solutions.

One workaround is literally:

  1. Copy the Cargo.toml into a new project
  2. cargo build --release
  3. Copy the real source files and overwrite the existing new project
  4. cargo build --release

I understand that this does not fit every possible use-case in the current and all future uses of cargo and rust, for all eternity, yes. This is a very specific problem, with a very clear solution (let me build only the dependencies). The only thing this has to do is build only the dependencies.

I don't see what a very simple, clear feature like that has to do with the "complex nature" and "half-measures".

Nobody is asking for a kitchen-sink "please do all the docker stuff for me" command, or a "please guess what I'm thinking right now" solution; we are just asking for a command that does a bog-standard build and stops right before building the bin or lib of the current top-level project.

I don't see how that is a difficult, complex, multi-faceted issue. I read the summary, the title of which ("Better support of Docker layer caching in Cargo") is missing the entire point already.

I just want to build the dependencies and not the current project. That's it. The context, reasoning, etc. about docker is irrelevant once you understand that the only feature that is requested here is bailing out of the build early.


I'm not expecting anyone to build this feature; it's been made clear that this is too complex to build and that I'm also "holding it wrong". My server just gets real toasty every deployment - luckily the DC has A/C.

Maybe eventually I'll have the time to contribute, I just think it's very silly that this is just being shot down.

@RReverser
Contributor

@lionkor This is a great summary. If Cargo ever wants to implement deeper Docker integration via some special cargo-docker command or something like that - great, that might be a big but interesting undertaking - but that should be out of scope of this particular issue.

This issue was merely asking for a way to build only the dependencies as specified in Cargo.toml / Cargo.lock, and it feels like discussion of other tools and behaviour is what caused it to stall in "analysis paralysis" due to an ever-expanding context and list of requirements.

Cargo already has cargo fetch, which acts in a very similar vein: take Cargo.lock, download dependencies. Having a command to build the downloaded dependencies seems like a well-scoped and natural next step rather than a "half-measure" for some wider problem.

@detly

detly commented Aug 22, 2024

The context, reasoning, etc. about docker is irrelevant once you understand that the only feature that is requested here is bailing out of the build early.

This issue was merely asking for a way to build only the dependencies as specified in Cargo.toml / Cargo.lock, and it feels like discussions of other tools and behaviour is what caused it to stall

Without any context, it's a request for a feature without a goal. Why do that? There's no discernible point to it. If the context is ignored, all that will happen is that, a day after cargo gets a new flag, this same group of issues will be reopened on the issue tracker because the flag doesn't quite have the outcomes everyone expects, plus another set of issues for any errors or oversights in the new feature that now has no apparent purpose.

Note that even after saying that context is irrelevant (@lionkor), you jump back to providing context for your frustration (your deployment). I'm not having a go at you for expressing that frustration, but pointing out that without context it would not make much sense to be frustrated by the lack of a feature. (Or maybe I mistook your point; sorry if so.)

I, like many others here, would like to have some way to reduce the build-test loop time. Once I gave up in a huff and actually implemented something for myself, I found that there were so many little details that I simply hadn't known about before. Not only that, but those details would be different for different use cases (even small changes to my own). It took that experience, plus quite a few readings of the devs' comments here, to realise that a --dependencies-only flag would not necessarily have the effect I assumed it would for my use case, nor for many others. At least, not as it's currently pitched.

I just think it's very silly that this is just being shot down.

It's not being shot down though, it's being considered and interrogated.

Yes, one solution is to make my own image with cargo-chef in it

I would definitely suggest trying this, it hasn't solved every issue in every CI pipeline I've had, but just ignoring the pre-built image and incorporating cargo chef into my usual "let's write my own image for this then" moments has been a real boon. It's even been great for local testing experiments, which is where I most often quickly roll my own Dockerfile with local bind mounts etc. etc. just to reproduce and weed out bugs before pushing anything for review.

@lionkor

lionkor commented Aug 22, 2024

Without any context, it's a request for a feature without a goal. Why do that?

I think I didn't make entirely clear what I meant:

The context is relevant for everyone to understand whether there is a problem, and why this is a solution to that problem.

Simply saying "a lot of other tools have a --only-deps or --deps-only or --only-build-deps" would be less useful, because why implement it just because other similar tools have it?

Instead, the context of a docker build is relatable and easy to understand, and so it is used to illustrate the problem. It's also useful in case the flag already exists and is just called "--foo-bar" instead. But it doesn't exist in the software, in this case.

The engineer's task is to extract a solution from the problem statement without encoding the problem in the solution (to avoid building something useful only for this one problem). In that vein, we don't use the words from the problem statement, such as "docker", "caching", or "CI/CD"; instead we get to the core problem and solve that. Such a solution includes words like "building" and "dependencies", but not "docker".

The context is relevant to understanding whether this is a problem, whether there is a built-in solution, how common/urgent this may be, and what the impact of building such a feature would be. So it's relevant generally, but it's not relevant to the solution itself.

It's not being shot down though, it's being considered and interrogated.

The suggested solution not only encodes the problem in the solution; the "solution" given is also only a third-party workaround, at best, which might work for people with enough time but isn't a good answer for such a widely requested (and therefore arguably important) feature.

Bringing it back to the solution here, now: if we could agree that there are use cases for building only the dependencies, just like there are use cases for downloading the dependencies (cargo fetch), etc., then we could realistically post a single comment saying "Here are the acceptance criteria, does anyone want to build this?", and that would constitute "not shooting it down". (Or whatever the equivalent of that process is in this repo; I'm not intimately familiar with it.)

@polarathene

polarathene commented Aug 22, 2024

It took that experience, plus quite a few readings of the devs' comments here, to realise that a --dependencies-only flag would not necessarily have the effect I assumed it would for my use case, nor for many others. At least, not as it's currently pitched.

💯 agree

I go into some of the why below, but I've encountered my own share of:

  • Why did this cause deps to be rebuilt when nothing changed
  • Why did this cause the build to take so long even though I made no changes between the build commands?

  • caching relevant directories via RUN's --mount=type=cache, but that somehow fails to cache some dependency downloads. Still caches all binary objects. That took way too long to configure

For Docker, you can get that sort of output via docker init, choosing Rust. It'll generate a template rust project for you with a Dockerfile and compose.yaml; you provide the actual rust project (or copy those files into it).

There are some concerns with cache mounts, but it really depends on context. Notably with a typical CI workflow:

  • Git checkouts do not preserve the mtime attribute on source files, and cargo relies on it for leveraging an existing cache. There are workarounds, but this is not Docker-specific.
  • Docker builds output a layer cache, but cache mounts are a separate cache storage. For GitHub Actions as a CI there is a separate action for exporting/importing this.

Neither of those concerns would be something you'd experience locally with a single git checkout and iterative builds, so they're not immediately apparent.

  • There is a separate issue for the cargo cache concern with mtime; a rough workaround sketch follows this list. You can also work around it with a separate cache like sccache, but note that it has its own caching caveats (covered in its docs).
  • The Docker cache issue is an obvious one and not really docker-specific: if your cache isn't retained across CI runs, you obviously have nothing to reuse.
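
A rough sketch of the mtime workaround mentioned in the first bullet (bash and GNU touch assumed; dedicated tools such as git-restore-mtime do the same thing faster):

# Reset each tracked file's mtime to its last commit time so cargo's
# fingerprint checks survive a fresh git checkout.
git ls-files -z | while IFS= read -r -d '' f; do
  touch -d "$(git log -1 --format=%cI -- "$f")" "$f"
done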

Personally I'm not a fan of cargo-chef. You can leverage cache mounts correctly for a shared cache, and project specific build cache.

If the problem only occurs in CI for you, you can also do the build outside of Docker, or do the build via a container before adding the result to a Dockerfile. Usually the motivation for using Docker is the more controlled build environment rather than the cache/build time; you still get that via the build-container + Dockerfile process, and for other needs you can use cargo-zigbuild.

You may also be interested in trying Earthly (it uses BuildKit, and you still get an image that Docker can use):


Bringing it back to the solution here, now: If we could agree that there are use cases for building only the dependencies, just like there are use cases for downloading the dependencies (cargo fetch)

Given the above, is it really a need for building dependencies separately, or is it an XY problem (your problem being due to another underlying issue that would still affect you)?

Downloading dependencies I can understand. But if pre-built deps were beneficial, could you demonstrate that in a CI workflow that simulates the above concerns, where Docker and Cargo cache behaviours aren't getting in the way? If you've only verified this with external tools like cargo-chef (which IIRC work around the issue), that doesn't mean such a feature would make that tooling obsolete.

I am late to this discussion, so perhaps that's been covered somewhere above already. But if no one has verified this, and there is no reason why it shouldn't be possible to verify, then that would be a good first step. Otherwise you're trying to resolve the wrong concern.

@Kobzol
Contributor

Kobzol commented Aug 22, 2024

This issue was merely asking for a way to build only the dependencies as specified in Cargo.toml / Cargo.lock, and it feels like discussions of other tools and behaviour is what caused it to stall with "analysis paralysis" due to an ever-expanding context and list of requirements.
Cargo already has cargo-fetch that acts in a very similar vein - take Cargo.lock, download dependencies. Having a command to build downloaded dependencies seems like a well-scoped and natural next step rather than a "half-measure" for some wider problem.

It definitely does seem like so, but it's unclear whether it really is that simple. I think that it's a general theme of this issue; people are frustrated because it seems like cargo refuses to implement a trivial thing, but the reality is that the thing is not as trivial as it might seem.

One thing that we could explore is the possibility of doing a cargo build that would completely ignore local Rust files. In that case, users would still need to copy all Cargo.toml files of a workspace and also the Cargo.lock and .cargo/config.toml files manually to a Docker layer for it to work with Docker layer caching, but at least they would not have to deal with copying src/lib.rs and friends, setting them to an empty file, dealing with mtime invalidation issues, etc.

However, even this seemingly simple thing has some unresolved questions, like what to do about { path = ... } dependencies (are those local or not?), and also how difficult it would be to implement in Cargo (I seem to remember that the resolving of local Rust targets was done relatively early in the pipeline and it might require non-trivial rearchitecting to ignore these files, but I might be wrong).

In any case, as with everything in Rust, it will need someone to drive this effort by communicating with the Cargo team, finding out what design axes there are, proposing some design for this, perhaps implementing it, etc. We can discuss this more on Zulip, as this issue is watched by a lot of people and it might be quite spammy to deal with that here.

@RReverser
Contributor

RReverser commented Aug 22, 2024

One thing that we could explore is the possibility of doing a cargo build that would completely ignore local Rust files.

That's exactly the original ask (at least for me) and would be perfect. Assuming that "ignore local Rust files" also means "ignore the absence of local Rust files", so that we can really copy Cargo.toml + Cargo.lock + other relevant configs and not bother creating dummy sources.

However, even this seemingly simple thing has some unresolved questions, like what to do about { path = ... } dependencies (are those local or not?)

I'd suggest mirroring the behaviour of cargo fetch as closely as possible to, again, keep this command and feature request well-scoped. If it's not downloaded by cargo fetch, it doesn't need to be built, and vice versa.

@fenollp

fenollp commented Aug 22, 2024

Hi! Thanks for the amazing summary document! I'm working on a thing that tries to maximize Docker build cache usage and I believe this approach may be of interest here.

It's a $RUSTC_WRAPPER that turns rustc calls into docker buildx build -o=./... calls (that run rustc): at most one buildx call per rustc invocation. It's very much WIP and not quite ready yet, but it gets there for simple projects today.
It generates Dockerfiles with MANY stages mapping to crate build calls, reusing them as we progress through cargo's build tree. Here's a sample from such a huge Dockerfile:

# syntax=docker.io/docker/dockerfile:1@sha256:fe40cf4e92cd0c467be2cfc30657a680ae2398318afd50b0c80585784c604f28
# Generated by https://github.com/fenollp/supergreen version 0.8.0
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.80.1-slim@sha256:e8e40c50bfb54c0a76218f480cc69783b908430de87b59619c1dca847fdbd753 AS rust-base

FROM scratch AS cratesio-anstyle-1.0.1-index.crates.io-6f17d22bba15001f
ADD --chmod=0664 --checksum=sha256:3a30da5c5f2d5e72842e00bcb57657162cdabef0931f40e2deb9b4140440cecd \
  https://static.crates.io/crates/anstyle/anstyle-1.0.1.crate /crate
SHELL ["/usr/bin/dash", "-c"]
RUN \
  --mount=from=rust-base,src=/lib,dst=/lib \
  --mount=from=rust-base,src=/lib64,dst=/lib64 \
  --mount=from=rust-base,src=/usr,dst=/usr \
    set -eux \
 && mkdir /extracted \
 && tar zxf /crate --strip-components=1 -C /extracted
FROM rust-base AS dep-lib-anstyle-1.0.1-dfef56a61f24fbad-index.crates.io-6f17d22bba15001f
WORKDIR /tmp/clis-buildxargs_master/release/deps
ENV \
  CARGO_CRATE_NAME="anstyle" \
  CARGO_MANIFEST_DIR="/home/pete/.cargo/registry/src/index.crates.io-6f17d22bba15001f/anstyle-1.0.1" \
  CARGO_PKG_AUTHORS= \
  CARGO_PKG_DESCRIPTION="ANSI text styling" \
  CARGO_PKG_HOMEPAGE="https://github.com/rust-cli/anstyle" \
  CARGO_PKG_LICENSE="MIT OR Apache-2.0" \
  CARGO_PKG_LICENSE_FILE= \
  CARGO_PKG_NAME="anstyle" \
  CARGO_PKG_README="README.md" \
  CARGO_PKG_REPOSITORY="https://github.com/rust-cli/anstyle.git" \
  CARGO_PKG_RUST_VERSION="1.64.0" \
  CARGO_PKG_VERSION="1.0.1" \
  CARGO_PKG_VERSION_MAJOR="1" \
  CARGO_PKG_VERSION_MINOR="0" \
  CARGO_PKG_VERSION_PATCH="1" \
  CARGO_PKG_VERSION_PRE= \
  TERM="tmux-256color" \
  RUSTCBUILDX=1
RUN \
  --mount=type=bind,from=cratesio-anstyle-1.0.1-index.crates.io-6f17d22bba15001f,source=/extracted,target=/home/pete/.cargo/registry/src/index.crates.io-6f17d22bba15001f/anstyle-1.0.1 \
    set -eux \
 && export CARGO="$(which cargo)" \
 && /bin/bash -c "rustc '--crate-name' 'anstyle' '--edition' '2021' '--error-format' 'json' '--json' 'diagnostic-rendered-ansi,artifacts,future-incompat' '--diagnostic-width' '254' '--crate-type' 'lib' '--emit' 'dep-info,metadata,link' '-C' 'opt-level=3' '-C' 'embed-bitcode=no' '--cfg' 'feature=\"default\"' '--cfg' 'feature=\"std\"' '--check-cfg' 'cfg(docsrs)' '--check-cfg' 'cfg(feature, values(\"default\", \"std\"))' '-C' 'metadata=dfef56a61f24fbad' '-C' 'extra-filename=-dfef56a61f24fbad' '--out-dir' '/tmp/clis-buildxargs_master/release/deps' '-C' 'strip=debuginfo' '-L' 'dependency=/tmp/clis-buildxargs_master/release/deps' '--cap-lints' 'warn' '-A' 'clippy::assigning_clones' '-A' 'clippy::blocks_in_conditions' '-W' 'clippy::cast_lossless' '-W' 'clippy::redundant_closure_for_method_calls' '-W' 'clippy::str_to_string' '-C' 'overflow-checks=true' /home/pete/.cargo/registry/src/index.crates.io-6f17d22bba15001f/anstyle-1.0.1/src/lib.rs \
      1> >(sed 's/^/::STDOUT:: /') \
      2> >(sed 's/^/::STDERR:: /' >&2)"
FROM scratch AS out-dfef56a61f24fbad
COPY --from=dep-lib-anstyle-1.0.1-dfef56a61f24fbad-index.crates.io-6f17d22bba15001f /tmp/clis-buildxargs_master/release/deps/*-dfef56a61f24fbad* /

FROM scratch AS cratesio-utf8parse-0.2.1-index.crates.io-6f17d22bba15001f
ADD --chmod=0664 --checksum=sha256:711b9620af191e0cdc7468a8d14e709c3dcdb115b36f838e601583af800a370a \
  https://static.crates.io/crates/utf8parse/utf8parse-0.2.1.crate /crate
SHELL ["/usr/bin/dash", "-c"]
RUN \
  --mount=from=rust-base,src=/lib,dst=/lib \
  --mount=from=rust-base,src=/lib64,dst=/lib64 \
  --mount=from=rust-base,src=/usr,dst=/usr \
    set -eux \
 && mkdir /extracted \
 && tar zxf /crate --strip-components=1 -C /extracted
FROM rust-base AS dep-lib-utf8parse-0.2.1-1f8b17e9e43ce6f1-index.crates.io-6f17d22bba15001f
WORKDIR /tmp/clis-buildxargs_master/release/deps
ENV \
  CARGO_CRATE_NAME="utf8parse" \
  CARGO_MANIFEST_DIR="/home/pete/.cargo/registry/src/index.crates.io-6f17d22bba15001f/utf8parse-0.2.1" \
  CARGO_PKG_AUTHORS="Joe Wilm <[email protected]>:Christian Duerr <[email protected]>" \
  CARGO_PKG_DESCRIPTION="Table-driven UTF-8 parser" \
  CARGO_PKG_HOMEPAGE= \
  CARGO_PKG_LICENSE="Apache-2.0 OR MIT" \
  CARGO_PKG_LICENSE_FILE= \
  CARGO_PKG_NAME="utf8parse" \
  CARGO_PKG_README= \
  CARGO_PKG_REPOSITORY="https://github.com/alacritty/vte" \
  CARGO_PKG_RUST_VERSION= \
  CARGO_PKG_VERSION="0.2.1" \
  CARGO_PKG_VERSION_MAJOR="0" \
  CARGO_PKG_VERSION_MINOR="2" \
  CARGO_PKG_VERSION_PATCH="1" \
  CARGO_PKG_VERSION_PRE= \
  TERM="tmux-256color" \
  RUSTCBUILDX=1
RUN \
  --mount=type=bind,from=cratesio-utf8parse-0.2.1-index.crates.io-6f17d22bba15001f,source=/extracted,target=/home/pete/.cargo/registry/src/index.crates.io-6f17d22bba15001f/utf8parse-0.2.1 \
    set -eux \
 && export CARGO="$(which cargo)" \
 && /bin/bash -c "rustc '--crate-name' 'utf8parse' '--edition' '2018' '--error-format' 'json' '--json' 'diagnostic-rendered-ansi,artifacts,future-incompat' '--diagnostic-width' '254' '--crate-type' 'lib' '--emit' 'dep-info,metadata,link' '-C' 'opt-level=3' '-C' 'embed-bitcode=no' '--cfg' 'feature=\"default\"' '--check-cfg' 'cfg(docsrs)' '--check-cfg' 'cfg(feature, values(\"default\", \"nightly\"))' '-C' 'metadata=1f8b17e9e43ce6f1' '-C' 'extra-filename=-1f8b17e9e43ce6f1' '--out-dir' '/tmp/clis-buildxargs_master/release/deps' '-C' 'strip=debuginfo' '-L' 'dependency=/tmp/clis-buildxargs_master/release/deps' '--cap-lints' 'warn' '-A' 'clippy::assigning_clones' '-A' 'clippy::blocks_in_conditions' '-W' 'clippy::cast_lossless' '-W' 'clippy::redundant_closure_for_method_calls' '-W' 'clippy::str_to_string' '-C' 'overflow-checks=true' /home/pete/.cargo/registry/src/index.crates.io-6f17d22bba15001f/utf8parse-0.2.1/src/lib.rs \
      1> >(sed 's/^/::STDOUT:: /') \
      2> >(sed 's/^/::STDERR:: /' >&2)"
FROM scratch AS out-1f8b17e9e43ce6f1
COPY --from=dep-lib-utf8parse-0.2.1-1f8b17e9e43ce6f1-index.crates.io-6f17d22bba15001f /tmp/clis-buildxargs_master/release/deps/*-1f8b17e9e43ce6f1* /

FROM scratch AS cratesio-anstyle-parse-0.2.1-index.crates.io-6f17d22bba15001f
ADD --chmod=0664 --checksum=sha256:938874ff5980b03a87c5524b3ae5b59cf99b1d6bc836848df7bc5ada9643c333 \
  https://static.crates.io/crates/anstyle-parse/anstyle-parse-0.2.1.crate /crate
SHELL ["/usr/bin/dash", "-c"]
RUN \
  --mount=from=rust-base,src=/lib,dst=/lib \
  --mount=from=rust-base,src=/lib64,dst=/lib64 \
  --mount=from=rust-base,src=/usr,dst=/usr \
    set -eux \
 && mkdir /extracted \
 && tar zxf /crate --strip-components=1 -C /extracted
FROM rust-base AS dep-lib-anstyle-parse-0.2.1-32683c410c273145-index.crates.io-6f17d22bba15001f
WORKDIR /tmp/clis-buildxargs_master/release/deps
ENV \
  CARGO_CRATE_NAME="anstyle_parse" \
  CARGO_MANIFEST_DIR="/home/pete/.cargo/registry/src/index.crates.io-6f17d22bba15001f/anstyle-parse-0.2.1" \
  CARGO_PKG_AUTHORS= \
  CARGO_PKG_DESCRIPTION="Parse ANSI Style Escapes" \
  CARGO_PKG_HOMEPAGE="https://github.com/rust-cli/anstyle" \
  CARGO_PKG_LICENSE="MIT OR Apache-2.0" \
  CARGO_PKG_LICENSE_FILE= \
  CARGO_PKG_NAME="anstyle-parse" \
  CARGO_PKG_README="README.md" \
  CARGO_PKG_REPOSITORY="https://github.com/rust-cli/anstyle.git" \
  CARGO_PKG_RUST_VERSION="1.64.0" \
  CARGO_PKG_VERSION="0.2.1" \
  CARGO_PKG_VERSION_MAJOR="0" \
  CARGO_PKG_VERSION_MINOR="2" \
  CARGO_PKG_VERSION_PATCH="1" \
  CARGO_PKG_VERSION_PRE= \
  TERM="tmux-256color" \
  RUSTCBUILDX=1
RUN \
  --mount=type=bind,from=cratesio-anstyle-parse-0.2.1-index.crates.io-6f17d22bba15001f,source=/extracted,target=/home/pete/.cargo/registry/src/index.crates.io-6f17d22bba15001f/anstyle-parse-0.2.1 \
  --mount=type=bind,from=out-1f8b17e9e43ce6f1,target=/tmp/clis-buildxargs_master/release/deps/libutf8parse-1f8b17e9e43ce6f1.rmeta,source=/libutf8parse-1f8b17e9e43ce6f1.rmeta \
    set -eux \
 && export CARGO="$(which cargo)" \
 && /bin/bash -c "rustc '--crate-name' 'anstyle_parse' '--edition' '2021' '--error-format' 'json' '--json' 'diagnostic-rendered-ansi,artifacts,future-incompat' '--diagnostic-width' '254' '--crate-type' 'lib' '--emit' 'dep-info,metadata,link' '-C' 'opt-level=3' '-C' 'embed-bitcode=no' '--cfg' 'feature=\"default\"' '--cfg' 'feature=\"utf8\"' '--check-cfg' 'cfg(docsrs)' '--check-cfg' 'cfg(feature, values(\"core\", \"default\", \"utf8\"))' '-C' 'metadata=32683c410c273145' '-C' 'extra-filename=-32683c410c273145' '--out-dir' '/tmp/clis-buildxargs_master/release/deps' '-C' 'strip=debuginfo' '-L' 'dependency=/tmp/clis-buildxargs_master/release/deps' '--extern' 'utf8parse=/tmp/clis-buildxargs_master/release/deps/libutf8parse-1f8b17e9e43ce6f1.rmeta' '--cap-lints' 'warn' '-A' 'clippy::assigning_clones' '-A' 'clippy::blocks_in_conditions' '-W' 'clippy::cast_lossless' '-W' 'clippy::redundant_closure_for_method_calls' '-W' 'clippy::str_to_string' '-C' 'overflow-checks=true' /home/pete/.cargo/registry/src/index.crates.io-6f17d22bba15001f/anstyle-parse-0.2.1/src/lib.rs \
      1> >(sed 's/^/::STDOUT:: /') \
      2> >(sed 's/^/::STDERR:: /' >&2)"
FROM scratch AS out-32683c410c273145
COPY --from=dep-lib-anstyle-parse-0.2.1-32683c410c273145-index.crates.io-6f17d22bba15001f /tmp/clis-buildxargs_master/release/deps/*-32683c410c273145* /

So each new buildx call has everything it needs to compile the current crate and its deps. And since its deps were just buildx-ed, they're in the cache, so artifacts get fully reused.

My point is that a full build with this tool gives you a final (huge) Dockerfile with all deps as stages. So maybe one could just strip out the last stages (the ones for local code) and use the rest as the base stages for a docker image.

@zkkv

zkkv commented Aug 29, 2024

Would be nice to have this implemented, à la npm install.

@wdoekes

wdoekes commented Aug 29, 2024

@zkkv @Brayan-724 : while I appreciate your enthusiasm, your comments are akin to a "me too". Please try to keep the noise down and instead use the emoji to express interest. Thanks!

@gustavovalverde

gustavovalverde commented Sep 2, 2024

Let me share our experiences and insights, building on this comment: #2644 (comment).

We've been using cargo-chef for a while, but we weren't fully satisfied with it. It doesn't support the --locked flag, which we rely on during builds, leading to unnecessary rebuilding of some dependencies. Additionally, it has a known issue that we reported here:

We also experimented with rsync and some custom approaches to improve caching, but ultimately we preferred to avoid the clutter and confusion these methods added to our Dockerfile, like this one, which became difficult to read and understand.

This led us to explore cache mounts, despite being aware of the mtime issue. We anticipated rebuilds when our CI copied files into the image, but simply re-running the build skips that step, reducing the build time to 3-5 seconds.

As @polarathene mentioned, we started with docker init, but it proved too simple for our project, which includes several Cargo.toml files across different crates. We had to map each one, and since we use a multi-stage Dockerfile for both test and prod builds with different flags, we had to repeat the process twice (which might not be necessary for most projects). Here's the PR we've been testing:

After implementing these changes, we've seen a ~36% improvement in build times. We no longer face the rebuild issues associated with cargo-chef, and we've eliminated the need for third-party tools. Once the mtime issue is resolved, which is in progress, we expect build times to improve even further:

Although our Dockerfile from the aforementioned PR should provide a good starting point for more complex projects, I'll also create a post with two simpler examples for those who want to copy/paste and modify a working example.
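
A minimal sketch of the cache-mount shape (paths assume the official rust image, where CARGO_HOME is /usr/local/cargo; my-app is a placeholder binary name):

# Registry and target dir live in cache mounts, not image layers, so the
# final binary must be copied out within the same RUN.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/app/target \
    cargo build --release --locked \
 && cp target/release/my-app /usr/local/bin/my-app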

@matt-allan

matt-allan commented Sep 7, 2024

Hi, I have been dealing with this lately and saw that the summary doc asked for feedback on the workarounds, so here you go:

Cargo chef

I haven’t actually tried this, because I looked at the issues and saw that certain Cargo features we are using were not supported (patches).

From looking at the code it takes on a lot of responsibility from Cargo and seems likely to be fragile.

Docker Cache mounts

The problem with docker cache mounts is that they cannot be exported or imported. If you are not using the same machine for each build, they are effectively useless.

Normally on GitHub Actions, AWS CodeBuild, etc I will export a cache and import it in subsequent builds. Even with mode set to max that only includes layers. It cannot include the cache mounts.

It might be possible to find the docker cache mount on the host and save that to a cache, but it’s not really a supported Docker API and it would require setting up additional storage.

If the other workarounds don't work, I will probably set up a self-hosted runner with a persistent cache.

Copy skeleton / two step build

I tried this. It’s a workspace with too many crates to copy by hand, so I wrote a tool to generate the skeleton from cargo metadata as a tar file.

I am currently struggling with mtime issues, but not the ones I expected.

When I generate the tar I set the mtime of the stub files to 1970, so they are older than the actual files. Later I copy in the actual files with their original timestamps.

I thought that since the mtime of the source file was newer than the mtime when it was last compiled, Cargo would rebuild it. But Cargo doesn’t check that. Instead it checks the dep-info file, which was built from the stub. And since the actual source preserved its timestamps when docker copied it in, it has not been updated since the last build according to the mtime.

I am going to try to work around this by adding a dependency to the stub that cannot ever appear in the real unit to force Cargo to invalidate the build.

It's worth noting that if there were a cargo build --deps-only flag (that literally only did that), it would solve this problem. Compiling the local crates with the stubs, when I really only want to compile the dependencies, is the root cause of this mtime issue.

The stub approach is nice in that I can invoke the build normally and everything supported by Cargo just works. But it is unfortunate that any issue with mtime comparison results in you “shipping hello world to production”, as someone else said.

@Kobzol
Contributor

Kobzol commented Sep 7, 2024

We have recently discussed this a bit more on Zulip. Here is a summary of that discussion so far:

To reiterate, the most basic form of better supporting Rust (re)builds in the context of Docker layers (as things currently stand, without alternatives like allowing a build with only the Cargo.lock, which is currently not possible in Cargo) consists of doing two things:

  1. Copy all Cargo.toml, Cargo.lock and config.toml files (perhaps some others? but these are definitely the most important) to the Docker image.
  2. Copy all "entrypoint" files (like src/lib.rs) to the Docker image, and reset/clear their contents.

I would argue that 1) is mostly a limitation of Docker, because it simply does not support COPY with a glob pattern. To resolve 1), you either need to explicitly enumerate all your Cargo.toml/etc. files in the Dockerfile (which I think is actually a good enough solution for almost all projects), or you need to generate the file list dynamically and use a multi-stage build (which is exactly what cargo-chef does for you). I don't think that there's a lot that Cargo could do here, unless it learned how to build only from the Cargo.lock file. Even if it implemented what cargo-chef does, it would force users to use a multi-stage build (which is probably a good idea for most Rust projects anyway, but it does not remove the complexity).

The 2) problem is more annoying, because the entrypoints are a bit trickier to enumerate/detect, and primarily because you need to get rid of their contents, and then you start running into mtime issues. Here Cargo could indeed help in theory, if it simply tried to compile the project without caring about library/binary/... targets at all (and thus ignored the fact that their entrypoints are missing on disk). We have discussed this and it is probably possible, but it would (probably) require a non-trivial refactoring of Cargo to make this work, because target discovery is performed very early in the Cargo build process. I suppose that if someone wanted to push this forward, having a proof-of-concept implementation of this approach could help the discussion going forward.

Just a remark: building out of Cargo.lock might look like an appealing alternative, but there are some unresolved design issues. From Cargo.lock, we don't know the set of available features, nor the valid targets that can be built. And there are even future features, such as required-features, where you could actually enable features (and thus affect which dependencies are built) from e.g. a binary target specification, which is again stored in Cargo.toml files.

@matt-allan

I am able to do 1 & 2 in user land pretty easily by using cargo metadata. It isn't critical to me that Cargo solves those problems officially.

What would really help me is what the issue originally asked for: a cargo build --deps-only flag.

The ability to only compile deps avoids the mtime issue completely and eliminates the risk of shipping a broken binary.

I am working around the lack of that flag by building the list of deps manually from cargo metadata, then running cargo build -p $pkgid for each one.

Since it is possible to do that, it seems like it wouldn't be impossible to implement the aforementioned flag in Cargo. WDYT?
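
The shape of that workaround, as a sketch (assumes jq; name@version is used as the -p spec, which can be ambiguous if two sources ship the same name and version):

# Build every package in the dependency graph except the workspace members.
cargo metadata --format-version 1 --locked |
  jq -r '.workspace_members as $ws
         | .packages[]
         | select(.id as $id | $ws | index($id) | not)
         | "\(.name)@\(.version)"' |
while IFS= read -r spec; do
  cargo build --release -p "$spec"
done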

@Kobzol
Contributor

Kobzol commented Sep 7, 2024

As I described above, possibly the simplest implementation of --deps-only that would also resolve issue 2) is to completely ignore entrypoint files on disk. This could indeed help unblock most of the annoying issues with Docker layer caching. It does not matter whether you copy the entrypoint files or not, though; this approach solves both the copying of entrypoint files and the mtime issues. In other words, there is nothing simpler about "just solving mtime"; the solution I described in my comment above would have to be used to solve 2) completely.

Note that if you're already manually copying all the TOML files and also the .rs entrypoint files and resetting them (which is not a great general solution for most people), then using a different fingerprinting mechanism than mtime might be an even simpler solution for your use-case. You can follow the progress of that here.

@mcronce

mcronce commented Sep 7, 2024

@matt-allan in case you haven't tried it, the workaround I use for the mtime problem (in projects where I'm not employing cargo chef) is to touch each crate's lib.rs and the target binary's entrypoint file prior to running cargo build, in the layer after I copy in the actual src directory/directories. E.g. in a non-workspace project:

COPY Cargo.toml Cargo.lock /repo/
WORKDIR /repo
RUN \
        mkdir -v src && \
        echo 'fn main() {}' > src/main.rs && \
        cargo build --release && \
        rm -Rvf src

COPY src /repo/src
RUN \
        touch src/main.rs && \
        cargo build --release

It's more boilerplate in a workspace, but if you're already generating a tarball for the skeleton files, I'd imagine generating this wouldn't be too much additional pain, and it'd be a solution to your problem using tools available today.

@matt-allan

Ok, I think I misunderstood you. It sounded like ignoring the entrypoint files was not simple ("require[s] a non-trivial refactoring of Cargo to make this work").

What I was proposing is a mechanism similar to -p filtering, but where Cargo filters the packages for you to include only non-local packages. That sounded like it would be simpler, as it doesn't require ignoring entrypoints.

@sdavids

sdavids commented Sep 7, 2024

  1. Copy all Cargo.toml, Cargo.lock and config.toml (Perhaps some other files? But these are definitely the most important) to the Docker image.

This is common to all build systems: You need to COPY all the build and build config files.

So this is not an issue with regards to Cargo.

One related question would be whether

https://doc.rust-lang.org/cargo/commands/cargo-build.html
https://doc.rust-lang.org/cargo/reference/config.html

are clear enough:

What files (environment variables, flags, stuff gleaned from the operating system, etc.) are relevant for cargo build, and in which order are they applied/merged/interpreted?


limitation of Docker […] does not support COPY with a glob pattern

--parents will make this obsolete:

$ mkdir /tmp/test && cd "$_"
$ cat << 'EOF' >Cargo.toml
[workspace]
resolver = "2"
EOF
$ cargo new -q one
$ cargo new -q two
$ cargo new -q dir/three
$ cargo new -q dir/sub/four
$ cargo -q build
$ cargo -q clean
$ tree --noreport /tmp/test
/tmp/test
├── Cargo.lock
├── Cargo.toml
├── dir
│   ├── sub
│   │   └── four
│   │       ├── Cargo.toml
│   │       └── src
│   │           └── main.rs
│   └── three
│       ├── Cargo.toml
│       └── src
│           └── main.rs
├── one
│   ├── Cargo.toml
│   └── src
│       └── main.rs
└── two
    ├── Cargo.toml
    └── src
        └── main.rs
$ cat << 'EOF' >Dockerfile
# syntax=docker/dockerfile:1.7-labs
FROM busybox

WORKDIR /tmp/test

COPY --parents **/Cargo.toml ./

COPY Cargo.lock ./

COPY --parents **/src ./
EOF
$ docker build -q -t test .
$ docker run --rm test tree /tmp/test
/tmp/test
├── Cargo.lock
├── Cargo.toml
├── dir
│   ├── sub
│   │   └── four
│   │       ├── Cargo.toml
│   │       └── src
│   │           └── main.rs
│   └── three
│       ├── Cargo.toml
│       └── src
│           └── main.rs
├── one
│   ├── Cargo.toml
│   └── src
│       └── main.rs
└── two
    ├── Cargo.toml
    └── src
        └── main.rs

Being explicit might be better from a security/reproducibility standpoint though …


So this would be the want articulated in this GitHub issue (in relation to Docker):

$ cat << 'EOF' >Dockerfile
# syntax=docker/dockerfile:1.7-labs
FROM rust

WORKDIR /tmp/test

COPY --parents Cargo.lock **/Cargo.toml ./

RUN cargo build --release --deps-only

COPY --parents **/src ./

RUN cargo build --release
EOF

@Kobzol
Contributor

Kobzol commented Sep 7, 2024

This is common to all build systems: You need to COPY all the build and build config files.

Sure, but in Python, JavaScript, Ruby, etc., you usually have a single lockfile and a single package file. In Rust, you have a lockfile plus a bunch of TOML files that can be almost arbitrarily nested. So it is a bit more annoying for Rust than for other similar tools.

--parents will make this obsolete:

Cool, didn't know about that one! Thanks for bringing it up. Hopefully it will be stabilized soon; it would make it a bit easier for Rust projects to copy their manifests.

So this would be the want (in relation to Docker) articulated in this GitHub issue:

Indeed, a cargo feature that would ignore Rust source entrypoints on disk could most likely work like this.

@ruffsl

ruffsl commented Sep 9, 2024

The problem with docker cache mounts is they cannot be exported or imported. If you are not using the same machine for each build they are effectively useless.

Normally on GitHub Actions, AWS CodeBuild, etc I will export a cache and import it in subsequent builds. Even with mode set to max that only includes layers. It cannot include the cache mounts.

It might be possible to find the docker cache mount on the host and save that to a cache, but it’s not really a supported Docker API and it would require setting up additional storage.

@matt-allan, while perhaps not a complete solution to the ticket here or the buildkit one you linked above, there are several GitHub Actions for caching buildkit's internal cache mounts, e.g. an actively maintained variant I've used previously:

I'd still like to see a solution to moby/buildkit#1512 eventually as well, but perhaps out of scope for the discussion here.

@SanderVocke

There is a lot to read in this thread, and I guess someone must have mentioned this previously, but I haven't found it.

For me, the reason to want the originally requested change is so that I can build dependencies non-verbosely, then build my own crates verbosely. That saves a lot of unnecessary log output in CI builds.

Currently there is no way to apply the -v / -vv flags to workspace crates only, nor is there a straightforward way to build only the dependencies.

Cleaning and rebuilding my crates easily takes 5-10 minutes of CI build time, so that is not an attractive option for me.
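
To illustrate that want with the hypothetical flag from this issue's title:

cargo build --quiet --dependencies-only   # hypothetical flag: build deps, terse output
cargo build -vv                           # only the workspace crates are left to build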
