Any possible way to set the host_compiler explicitly instead of inferring from current toolchain? #194
Comments
You need to configure the cuda toolchain manually to achieve it.

Out of curiosity, why do you want to use ccache with bazel?
Could you please point out how to do it manually? I noticed this line [1].

[1] https://github.com/bazel-contrib/rules_cuda/blob/main/cuda/private/actions/compile.bzl#L36
We're migrating from CMake to Bazel, and we use ccache in the CMake setup. Besides the local cache, we also use a secondary (remote) cache for ccache. I know Bazel also has remote cache support, but we'll still use ccache in the beginning during the migration.
The toolchains are instantiated from https://github.com/bazel-contrib/rules_cuda/tree/main/cuda/templates
BTW, Bazel also has a local cache.
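For reference, the local cache mentioned here is Bazel's disk cache, enabled with a flag in `.bazelrc` (the path below is just an example):

```
# .bazelrc: reuse action outputs from a local directory across builds
build --disk_cache=~/.cache/bazel-disk
```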
I think you're talking about cuda toolchains. The host_compiler I mentioned above comes from the cc toolchain.
Maybe I didn't say it clearly. We want to use ccache's secondary cache, and we don't want to spend effort setting up a Bazel remote cache, so we'll still use ccache for some time.
Oops, it seems I have remembered it incorrectly. The …
I'm in a similar situation. In our use case, we use nvcc for CUDA code, but we still want most of our code (non-CUDA code) compiled by clang. Currently I have a very ugly solution of defining a series of special … Any suggestion?
@wudisheng I think nvcc can accept clang passed to -ccbin.
@wudisheng You are describing a different problem. To make nvcc use clang as the host compiler (NOTE: you must ensure the host compiler for cuda and the cc compiler for cc rules are the same), see rules_cuda/cuda/private/toolchain_configs/nvcc.bzl, lines 66 to 78 at 8f2f2e6. Otherwise it should be a bug or misconfiguration.
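Since the host compiler is inferred from the resolved cc toolchain, one way to keep the two in sync (a sketch, assuming Bazel's auto-configured C++ toolchain rather than a fully custom one) is to make the cc toolchain itself use clang:

```sh
# The auto-configured C++ toolchain honors the CC environment variable,
# and rules_cuda then derives the same compiler for nvcc's -ccbin.
export CC=clang
bazel build //your:cuda_target   # target label is a placeholder
```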
If for any reason you want to use a different host_compiler, here's what I did in my project: I patch the host compiler logic in compile.bzl [1]. I know it's kind of hacky, and I don't recommend people do it. It just fits my need, and works in my project.

[1] https://github.com/bazel-contrib/rules_cuda/blob/v0.2.1//cuda/private/actions/compile.bzl#L36
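The patch body itself did not survive in this thread. Purely as an illustration (the original line and the override path below are assumptions, not actual rules_cuda source), the shape of such a hack would be to hardcode the -ccbin value where compile.bzl derives it:

```diff
 # cuda/private/actions/compile.bzl -- hypothetical sketch of the hack
-    args.add("-ccbin", host_compiler_from_cc_toolchain)
+    args.add("-ccbin", "/opt/ccache/bin/gcc")  # hardcoded override (assumed path)
```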
@wyhao31 I think you can avoid a patch by disabling the host_compiler_path feature.

Warning: this is not recommended, especially on Windows!

Warning: this will break hermeticity.
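Concretely, the suggestion amounts to requesting that the feature be turned off on the target, as in the sketch below (target name is illustrative; as the reply further down explains, this turns out not to work because the compile action re-enables the feature):

```python
load("@rules_cuda//cuda:defs.bzl", "cuda_library")

# BUILD sketch: the "-" prefix requests that a feature be disabled.
cuda_library(
    name = "kernels",  # illustrative target
    srcs = ["kernels.cu"],
    features = ["-host_compiler_path"],
)
```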
In our specific environment (e.g. versions), no. And even if it could, we'd like …
Thanks! Actually, I tried this approach before, but it turned out not to work. The reason is that cuda_compile_action implies host_compiler_path [1], so it seems host_compiler_path cannot be disabled.

[1] https://github.com/bazel-contrib/rules_cuda/blob/v0.2.1/cuda/private/toolchain_configs/nvcc.bzl#L111
The "%{host_compiler}" here is It seems not supported out of box, and I can hack it, I'm trying to explore a better way to integrate it with the new platform-based toolchain resolution. |
I'm curious whether it is possible to let Bazel use two differently configured toolchains for the same rule (different targets) in a single build. If it is possible, we can implement a … If not, then it should be left as a hack, because it will easily break a lot of things…
Technically, yes. The official way is to use transitions, which is considerably difficult. For legacy toolchain selection, … Using this way I can get the behavior I described above, but it does not work if I enable platform-based toolchain resolution, because there isn't a way to use a particular rule as a constraint, and config_settings are generally global.
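For reference, the transition-based approach looks roughly like the sketch below. Everything here is hypothetical, not rules_cuda API: the //cuda:host_compiler build setting would have to be defined separately (e.g. as a string_flag), and a toolchain would then read its host compiler from that setting.

```python
def _host_compiler_transition_impl(settings, attr):
    # Pin the (hypothetical) host-compiler build setting for everything
    # built beneath this target.
    return {"//cuda:host_compiler": attr.host_compiler}

_host_compiler_transition = transition(
    implementation = _host_compiler_transition_impl,
    inputs = [],
    outputs = ["//cuda:host_compiler"],
)

def _with_host_compiler_impl(ctx):
    # Simply forward the files of the transitioned dependencies.
    files = [dep[DefaultInfo].files for dep in ctx.attr.deps]
    return [DefaultInfo(files = depset(transitive = files))]

with_host_compiler = rule(
    implementation = _with_host_compiler_impl,
    attrs = {
        "host_compiler": attr.string(),
        "deps": attr.label_list(cfg = _host_compiler_transition),
        # Required by Bazel for rules that use user-defined transitions.
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
    },
)
```

A target would then be wrapped as `with_host_compiler(name = "kernels_clang", host_compiler = "/usr/bin/clang", deps = [":kernels"])`, so two wrappers with different settings yield two differently configured builds of the same rule.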
I also ran into this issue. Maybe this is a naive question, but can't we convert the private attribute …?
This sounds like a weird request, and here's my use case.
I use a customized C++ toolchain in my project. In order to use ccache, I wrap the actual compile command in a script, and put the script path as the tool_path in the toolchain definition.

When using `cuda_library`, `/path/to/wrap-script` appears as the parameter after `nvcc -ccbin`, and this causes nvcc to hang forever. The correct way of using ccache for cuda libraries is to wrap the nvcc call with ccache, not gcc. This is why I want to set the host_compiler explicitly.

I'm not sure whether I'm using it correctly, or whether there's an existing way to bypass this. Thanks in advance for any suggestion.
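For completeness, the direction described above looks roughly like this (the wrapper name, the nvcc path, and a ccache version that understands nvcc are assumptions about the local setup):

```sh
#!/usr/bin/env bash
# nvcc-ccache-wrapper.sh (hypothetical): cache the whole nvcc invocation
# with ccache, instead of wrapping the -ccbin host compiler.
exec ccache /usr/local/cuda/bin/nvcc "$@"
```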