Add SpanProcessor for OpenTelemetry #875
Conversation
Force-pushed from 0ebd132 to dee9540.
@whatyouhide ☝🏻 I'm having trouble figuring out these failures. Could you tell me how I could debug what exactly causes the :sentry app startup failure? We start it manually in the umbrella test helpers, and for whatever reason it started failing when opentelemetry is included.
Adding just the OTel deps without any other changes leads to the same issue?
Please write some minimal description in the PR.
@whatyouhide @sl0thentr0py I narrowed it down to
@whatyouhide for the time being I addressed it by using optional deps for the otel packages via 6fdf121, but then one of the tests in event_test.exs started to fail, so I fixed it via d04c5d3 even though I don't understand what's going on there 🙃
@solnic is this ready for review? It's still a draft.
Not yet. I got it working, but Phoenix + Bandit spans are not processed in a way that would make sense for Sentry, for some reason. I've been investigating how to fix it. It seems like Phoenix spans are not coming in as children of Bandit spans, so there's a disconnect there. I'll figure it out 🙂
If we're just shipping a [...]. We will revisit packaging later once we have a proper working prototype.
Force-pushed from b607ac1 to bdcc5de.
@sl0thentr0py @whatyouhide this is now open for review. I got it deployed to production already and it's working well (see the screenshots in the description).
Okay, I'm taking a preliminary look now. One thing we definitely need: instead of the boolean, proper sampling controls. Without these, it is very hard for people to control quota spend, so this is a hard requirement.
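A hedged sketch of the kind of sampling configuration this asks for; the option names `traces_sample_rate` and `traces_sampler` are taken from commits later in this thread, and the config location and values are illustrative only:

```elixir
# config/runtime.exs -- illustrative only; the option name follows the
# traces_sample_rate work referenced later in this PR thread.
import Config

config :sentry,
  dsn: System.get_env("SENTRY_DSN"),
  # Sample roughly 20% of traces instead of an all-or-nothing boolean.
  traces_sample_rate: 0.2
```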
some suggestions, looks very good otherwise!
Looking like a good start but still a bunch of work left to do. Let me know if any of the comments are not clear!
Force-pushed from 9fcc55e to 65c039c.
@solnic I'm assuming you're still working on this, so re-request my review once it's ready again.
@whatyouhide yes, wrapping it up today, still a couple of things to address 🙃
Force-pushed from c9152c6 to 1654724.
@sl0thentr0py I'll add the traces_sampler config in a separate PR.
Force-pushed from 01585a8 to 5e8fc99.
This was redundant - we store the attributes in the trace context
Force-pushed from 48fbd08 to f1183b3.
lgtm!!
and thank you for the feature
The remaining feedback will be addressed in follow-up PRs as this one has become too big with over 140 comments.
@solnic Why did this get reverted? Is there no OpenTelemetry span support now?
@jwaldrip we had to revert in order to push 10.10.0 with the OTP 28 fixes. It'll be restored soon with an improved Sampler, which is the missing piece for finally releasing it.
* Revert "Revert "Add SpanProcessor for OpenTelemetry (#875)"". This reverts commit 2ced90e.
* Better span sampler (#903):
  * Better span sampler. This updates OpenTelemetry.Sampler so that it uses `traces_sample_rate` for sampling spans before they get sent to the span processor. This way we're not processing spans that are dropped during sampling, which was previously the case because the Client was responsible for sampling just before attempting to send a transaction.
  * Enhance sampling logic to record discarded transactions.
  * Clarify trace-level sampling decision-making. Previously it incorrectly referred to "parent" where in reality we're dealing with spans from the same trace and respect sampling decisions that were already made for a given trace. This is not the same thing as the parent/child spans handled in the SpanProcessor, as that type of relationship is established post-sampling.
  * Remove unnecessary sleep calls in sampler tests.
  * Fix flaky test.
  * Clean up make_sampling_decision.
  * Simplify client reports for dropped transactions.
  * Update traces_sample_rate default value to be nil.
  * Account for `nil` sample_rate when sampling.
  * Use put_test_config.
  * Update traces_sample_rate spec to allow nil value.
* Support for traces_sampler option (#910):
  * Add support for `traces_sampler` config option.
  * Update docs.
  * Remove unnecessary attribute merging.
  * Call traces_sampler only for root spans.
  * Reuse sampling decision logic.
  * Add test for handling invalid traces_sampler return values.
  * Add tests for SamplingContext access functions.
* Add basic integration tests for OpenTelemetry (#912).
* Update CHANGELOG.
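The commits above also mention a `traces_sampler` option that is called only for root spans. A minimal sketch of what such a callback could look like; the shape of the sampling context passed to it (a map with a transaction name) is an assumption for illustration, not taken from the actual implementation:

```elixir
# config/config.exs -- illustrative traces_sampler callback; the sampling
# context shape (:transaction_context with a :name key) is assumed, not verified.
import Config

config :sentry,
  traces_sampler: fn sampling_context ->
    case sampling_context do
      # Drop health-check traffic entirely (hypothetical route name).
      %{transaction_context: %{name: "GET /health"}} -> 0.0
      # Sample everything else at 10%.
      _ -> 0.1
    end
  end
```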
To make this work you need to add opentelemetry deps:
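The dependency list itself did not survive in this comment; a sketch of what it likely contains, assuming the standard OpenTelemetry Erlang/Elixir packages plus the Phoenix and Bandit instrumentation libraries mentioned earlier in the thread (version requirements are illustrative):

```elixir
# mix.exs -- illustrative version requirements; pin to the latest releases.
defp deps do
  [
    {:sentry, "~> 10.0"},
    {:opentelemetry, "~> 1.5"},
    {:opentelemetry_api, "~> 1.4"},
    # Instrumentation libraries for Phoenix and Bandit spans:
    {:opentelemetry_phoenix, "~> 2.0"},
    {:opentelemetry_bandit, "~> 0.2"}
  ]
end
```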
Then configure our span processor:
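The configuration snippet is likewise missing; a sketch assuming the module names implied by the PR title and commit messages (`Sentry.OpenTelemetry.SpanProcessor` and `Sentry.OpenTelemetry.Sampler`) and the standard `:opentelemetry` app-env keys. Double-check the released docs for the exact keys and options:

```elixir
# config/config.exs -- module names and config keys assumed from the PR, not verified.
import Config

# Route OTel spans through Sentry's span processor and sampler.
config :opentelemetry,
  span_processor: {Sentry.OpenTelemetry.SpanProcessor, []},
  sampler: {Sentry.OpenTelemetry.Sampler, []}
```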
Things should start looking more or less like the screenshots in the PR description.