[Bug]: Assigning opentelemetry_otlp layer causes hang when using tokio::spawn #2071
Comments
I managed to reproduce this in a debugger. It seems this crate assumes that futures_executor::block_on plays nicely with Tokio, which I assume is not the case. It would be good to find a proper solution to this, but in the meantime you can use install_batch() instead of install_simple(), which will use a Tokio-backed executor and seems to sidestep the problem.
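For illustration, a sketch of that workaround, assuming the 0.24-era opentelemetry-otlp pipeline API (method names and return types vary between releases); it also assumes the grpc-tonic feature of opentelemetry-otlp and the rt-tokio feature of opentelemetry_sdk:

```rust
use opentelemetry::trace::TraceError;
use opentelemetry_sdk::{runtime, trace::Tracer};

// Sketch only: install_batch(Tokio) exports spans from a background task driven
// by the Tokio runtime, instead of blocking the calling thread on each span the
// way install_simple() does.
fn init_tracer() -> Result<Tracer, TraceError> {
    opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(opentelemetry_otlp::new_exporter().tonic())
        .install_batch(runtime::Tokio)
}
```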
I also experience the same issue after updating. It was working fine on the previous version, and after updating it started to hang.
Thanks for sharing more details! You are right. This is broken due to #1612, which used futures_executor::block_on.
Attached a debugger to the otel-rust code to see exactly which line it is stuck at; it is exactly here: opentelemetry-rust/opentelemetry-proto/src/proto/tonic/opentelemetry.proto.collector.trace.v1.rs, line 183 (commit 976bc54),
which is called in the flow below:
So, it hangs while processing the gRPC response from the collector. The issue is the same as the one described here:
@lalitb Thanks for digging further! It looks like we need to redesign the SimpleProcessor too, as it cannot be used safely from within a Tokio runtime.
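To make the failure mode concrete, here is a hypothetical, self-contained sketch of that pattern (not opentelemetry code): a Tokio worker thread is blocked with futures_executor::block_on while the blocked future needs that same runtime to make progress, so on a current-thread runtime it never completes.

```rust
use std::time::Duration;

// Assumed dependencies: tokio (with "full" features) and futures-executor.
#[tokio::main(flavor = "current_thread")]
async fn main() {
    tokio::spawn(async {
        // Stand-in for a simple span processor exporting synchronously on span end:
        // block_on parks the only worker thread, so Tokio's timer can never fire
        // the sleep that block_on is waiting on.
        futures_executor::block_on(async {
            tokio::time::sleep(Duration::from_millis(10)).await;
        });
        println!("export finished"); // never reached
    })
    .await
    .unwrap();
}
```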
I don't know if it is related to this, but I've seen that opentelemetry-otlp v0.25 forces a downgrade from Tokio v1.40 to Tokio v1.38.1, apparently because the Cargo.toml uses a "~" (tilde) dependency (as seen here: https://crates.io/crates/opentelemetry-otlp/0.25.0/dependencies). The previous version (v0.17) used a caret dependency (https://crates.io/crates/opentelemetry-otlp/0.17.0/dependencies).
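For reference, this is the difference between the two requirement styles in Cargo.toml (an illustrative snippet, not the crate's actual manifest):

```toml
[dependencies]
# Tilde requirement: allows only 1.38.x, so Tokio 1.40 gets downgraded to 1.38.1.
tokio = { version = "~1.38", features = ["full"] }

# Caret requirement (the Cargo default): allows any compatible 1.x >= 1.38.
# tokio = { version = "1.38", features = ["full"] }
```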
What happened?
Hey! I'm having an issue when trying to use an opentelemetry_otlp layer in a tracing_subscriber, when spawning another task with tokio::spawn. I've managed to make a minimal reproduction example. Here's the Cargo.toml that I have currently, and this is the code that hangs:
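The original Cargo.toml and code blocks are missing from this copy of the issue, so the following sketch is a reconstruction from the surrounding description: the function names some_function and inner_function come from the report, while the crate versions, feature flags, and exporter setup are assumptions.

```rust
// Assumed dependencies: tokio (features = ["full"]), tracing, tracing-subscriber,
// tracing-opentelemetry, opentelemetry 0.24, opentelemetry_sdk 0.24 (rt-tokio),
// and an opentelemetry-otlp release compatible with opentelemetry 0.24.
use tracing::instrument;
use tracing_subscriber::prelude::*;

#[instrument]
async fn inner_function() {
    tracing::info!("inside inner_function");
    // The hang is reported to happen when this span closes, i.e. on return.
}

#[instrument]
async fn some_function() {
    inner_function().await;
    tracing::info!("inner_function returned");
}

#[tokio::main]
async fn main() {
    // install_simple() is assumed here; the first comment above suggests
    // install_batch(Tokio) as a workaround.
    let tracer = opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(opentelemetry_otlp::new_exporter().tonic())
        .install_simple()
        .expect("failed to install OTLP tracer");

    tracing_subscriber::registry()
        .with(tracing_opentelemetry::layer().with_tracer(tracer))
        .init();

    // Spawning the instrumented function is what triggers the hang;
    // calling some_function().await directly is reported to work.
    tokio::spawn(some_function()).await.unwrap();
}
```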
If I run the code as is, this is the output that I get:
The program seems to hang when exiting inner_function(). This happens regardless of whether the OTLP collector is running on the system, or whether the Tokio runtime is running in multi-thread or single-thread mode. However, if I don't add the OTLP layer, or if I stop using tokio::spawn() and instead call some_function().await directly, then the program behaves as expected:
Any ideas on what's going wrong here? Thank you in advance!
NOTE 1: I've also created an issue in the tracing-opentelemetry repository in the event that it's actually tracing-subscriber causing the issue: tokio-rs/tracing-opentelemetry#162
NOTE 2: I also noticed a similar-looking issue in #1745, but I wasn't sure if it was the same bug, as they do not use a Tokio runtime in their example.
API Version
0.24
SDK Version
N/A
What Exporter(s) are you seeing the problem on?
OTLP
Relevant log output