
Avoid cargo throttling system with too many tasks on slower CPUs #8556


Closed
clarfonthey opened this issue Jul 28, 2020 · 4 comments
Labels
A-jobserver Area: jobserver, concurrency, parallelism C-feature-request Category: proposal for a feature. Before PR, ping rust-lang/cargo if this is not `Feature accepted` Performance Gotta go fast!

Comments

@clarfonthey

Describe the problem you are trying to solve

I develop on a Pinebook Pro. Sometimes, cargo builds can seriously freeze up my computer because cargo diligently spawns as many parallel jobs as it can, even when this is not desired. I have a few ideas for solutions, but figured I'd at least start the discussion, as I think this is a reasonable thing to consider.

Right now, the number of compilation tasks is determined purely by the number of cores available on the CPU. It would be nice if this were altered in some cases to avoid completely throttling the system, potentially with an easy setting. While it is possible to configure the number of concurrent jobs with build.jobs, this is not always ideal as the number of jobs depends on system load and may also not be as easy for a user to determine.

Describe the solution you'd like

Option 1: Cargo could occasionally check the system load (either CPU percentage or load average) during compilation and scale down the number of jobs if the system load reaches a certain threshold. This can also safeguard against concurrent cargo compilations by ensuring that two cargo instances don't spawn twice the number of threads that can be safely handled.
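To make option 1 concrete, here is a minimal sketch of the scaling decision, assuming a hypothetical `allowed_jobs` helper (cargo has no such function today; the threshold semantics are modeled loosely on GNU make's `-l`/`--max-load` flag):

```rust
/// Given the configured job count, the current 1-minute load average,
/// and a load ceiling, decide how many parallel jobs to allow right now.
/// Hypothetical helper for illustration only.
fn allowed_jobs(max_jobs: usize, load_avg: f64, max_load: f64) -> usize {
    if load_avg < max_load {
        max_jobs
    } else {
        // Scale down proportionally to how far over the ceiling we are,
        // but always keep at least one job so the build makes progress.
        let scale = max_load / load_avg;
        ((max_jobs as f64 * scale).floor() as usize).max(1)
    }
}

fn main() {
    // Idle machine: use full parallelism.
    assert_eq!(allowed_jobs(6, 0.5, 4.0), 6);
    // Load at twice the ceiling: halve the job count.
    assert_eq!(allowed_jobs(6, 8.0, 4.0), 3);
    // Never drop to zero jobs.
    assert_eq!(allowed_jobs(1, 100.0, 4.0), 1);
    println!("ok");
}
```

On Linux the load average could be read from `/proc/loadavg`; other platforms would need their own probes, which ties into the portability concern in the notes below.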

Option 2: Potentially, cargo could keep a global lock tracking the number of jobs running across all instances, and use that when determining the number of jobs to schedule per instance.

Option 3: Similarly to option 1, cargo could also alter the scheduling priority of individual tasks to ensure that other applications don't grind to a halt.
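A simple way to sketch option 3 on Unix-like systems is to launch each compiler process through the POSIX `nice` utility so it yields CPU time to interactive programs. This is illustrative only (cargo does not do this); the `spawn_niced` helper and the `echo` command are stand-ins:

```rust
use std::process::{Command, Output};

// Run a child process through `nice` so it gets a lower scheduling
// priority. A real integration would wrap rustc invocations; `echo`
// below is just a placeholder. Unix-only.
fn spawn_niced(program: &str, args: &[&str], niceness: i32) -> std::io::Result<Output> {
    Command::new("nice")
        .arg("-n")
        .arg(niceness.to_string())
        .arg(program)
        .args(args)
        .output()
}

fn main() -> std::io::Result<()> {
    let out = spawn_niced("echo", &["hello"], 10)?;
    assert!(out.status.success());
    assert_eq!(String::from_utf8_lossy(&out.stdout).trim(), "hello");
    Ok(())
}
```

Windows has no `nice`, so this path would need a platform-specific priority API there, again echoing the portability caveat below.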

Notes

The solutions here may not always be platform-independent. For example, I know that Linux (and other *nix varieties) offer way more tools for determining system load than Windows, and it may be that we can't do something reasonable on every platform.

Right now, the workaround is to manually set build.jobs and/or set cargo's scheduler priority on the command line. It would be nice to have more built-in support for this, though.
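For reference, the existing build.jobs workaround looks like this in cargo's configuration (the value `2` is just an example for a constrained machine):

```toml
# .cargo/config.toml — cap cargo's parallelism for this machine or project
[build]
jobs = 2
```

The priority half of the workaround is typically something like `nice -n 19 cargo build` on Unix-like systems.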

This problem is also not unique to cargo, although cargo's situation is somewhat unusual: it is designed to parallelize as much as possible on its own, whereas traditionally compilers were spawned in parallel by IDEs or makefiles.

@clarfonthey clarfonthey added the C-feature-request Category: proposal for a feature. Before PR, ping rust-lang/cargo if this is not `Feature accepted` label Jul 28, 2020
@ehuss ehuss added the Performance Gotta go fast! label Jul 29, 2020
@ehuss ehuss added the A-jobserver Area: jobserver, concurrency, parallelism label Mar 2, 2021
@weihanglo
Member

For option 1, since we already have a make-compatible jobserver, I am thinking of adding a --max-load option to the jobserver crate, similar to what GNU make does. This would benefit the whole rustc/cargo build system. I would love to give it a try if it is a feasible solution 😀

@clarfonthey
Author

clarfonthey commented Apr 23, 2021

Honestly, that might be a good start. I tried to remain as agnostic to the solution as possible in my original description, but I'm 100% on board with small changes that seem relatively inoffensive and might help make the problem more manageable.

IMHO whatever happens should reasonably work without the user specifying multiple options. It's reasonable for cargo to max out the system load, but ideally it shouldn't compete too strongly with other programs. Ideally it shouldn't compete with itself at all, since the use case of having multiple rust-analyzer/rls sessions open (by accident or intentionally) seems reasonable enough and cargo should have the ability to keep track of itself IMHO.

(Basically what I'm saying is it should probably have a reasonable default if you go down that path, but also we might need more stuff.)

@notriddle
Contributor

Another reasonable possibility would be a throughput-measuring scheduler, such as hill climbing.
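A hill-climbing scheduler in this context would periodically measure build throughput and nudge the job count up or down, reversing direction when throughput drops. A minimal sketch of that control loop (illustrative only; the `HillClimber` type is hypothetical and not how cargo schedules today):

```rust
/// A one-dimensional hill climber over the parallel job count:
/// keep moving in the same direction while throughput improves,
/// reverse when it gets worse.
struct HillClimber {
    jobs: usize,
    step: i64, // +1 or -1
    last_throughput: f64,
}

impl HillClimber {
    fn new(initial_jobs: usize) -> Self {
        HillClimber { jobs: initial_jobs, step: 1, last_throughput: 0.0 }
    }

    /// Feed in the throughput measured at the current job count
    /// (e.g. crates compiled per minute); returns the next job count.
    fn observe(&mut self, throughput: f64) -> usize {
        if throughput < self.last_throughput {
            // Throughput got worse: reverse direction.
            self.step = -self.step;
        }
        self.last_throughput = throughput;
        let next = self.jobs as i64 + self.step;
        self.jobs = next.clamp(1, 256) as usize;
        self.jobs
    }
}

fn main() {
    let mut hc = HillClimber::new(4);
    assert_eq!(hc.observe(10.0), 5); // improving: keep increasing
    assert_eq!(hc.observe(12.0), 6); // still improving
    assert_eq!(hc.observe(9.0), 5); // worse: back off
    println!("ok");
}
```

The appeal of this approach is that it needs no platform-specific load probes: it only measures the build's own progress.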

@epage
Contributor

epage commented Nov 3, 2023

Closing in favor of #12912 so we keep the conversation in one place.

@epage epage closed this as not planned Nov 3, 2023