fix(detector-gcp): suppress tracing when requesting GCP metadata #2321
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #2321      +/-   ##
==========================================
- Coverage   90.97%   90.43%   -0.55%
==========================================
  Files         146      148       +2
  Lines        7492     7326     -166
  Branches     1502     1518      +16
==========================================
- Hits         6816     6625     -191
- Misses        676      701      +25
nit: missing license header, and mixed 4- and 2-space indents in this file. Unfortunately the eslint setup doesn't lint .js files.
Thanks! Maybe we could also make eslint work on .js files (but not in this PR).
Definitely not this PR.
      assertEmptyResource(resource);
    });
  });

  describe('internal tracing', () => {
(I guess I forgot to hit "save" in my first review.)
Wow this was a fair amount of effort to write a test case for this.
I don't know how others would feel, but I wouldn't require similar test code be added for suppressTracing usage in other detectors.
Yup, it was a fair amount of work 😓. But I needed:
- the sdk running, so I had to create the fixture
- to avoid nock, since real requests won't happen and we wouldn't really be testing
- gcp-metadata to get responses. I could let it fail, but I found that the request time varies in a way that makes the test flaky.

There is another approach I can take, but first I wanted some feedback on this one. If we finally use this one, I may be inclined to move the mock server to @opentelemetry/contrib-test-utils.
About the 3rd point in my previous comment: I found that if the async attributes of the resource are resolved after calling sdk.shutdown(), we lose some spans. Without knowing too much about the mechanics of the processors/exporters, I'd guess that these spans are not yet in the queue that is flushed during shutdown. I think this could be a potential issue for apps that have a short execution time (thinking of lambda or similar).
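For illustration, a minimal sketch of the workaround in a short-lived app, assuming the gcpDetector instance and the waitForAsyncAttributes() call shown in the diff further below:

// Sketch of the workaround for short-lived processes: make sure the
// resource's async attributes have settled before shutting the SDK down,
// otherwise spans created during detection may be dropped.
const { gcpDetector } = require('@opentelemetry/resource-detector-gcp');

async function shutdownGracefully(sdk) {
  // Assumption: detect() returns the (cached) Resource whose async
  // attributes we need to await before the final flush.
  await gcpDetector.detect().waitForAsyncAttributes();
  await sdk.shutdown();
}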
@trentm the other approach seems to work and it's shorter 🎉 (although we're getting a resource with empty values). The changes are:
- remove the mock server
- trigger the detection again and waitForAsyncAttributes
diff --git a/detectors/node/opentelemetry-resource-detector-gcp/test/fixtures/use-gcp-detector.js b/detectors/node/opentelemetry-resource-detector-gcp/test/fixtures/use-gcp-detector.js
index 0c71a081..f88d2ae4 100644
--- a/detectors/node/opentelemetry-resource-detector-gcp/test/fixtures/use-gcp-detector.js
+++ b/detectors/node/opentelemetry-resource-detector-gcp/test/fixtures/use-gcp-detector.js
@@ -78,6 +78,9 @@ server.listen(0, '127.0.0.1', async function () {
   });

   // flush any left spans
+  // NOTE: this adds extra requests but it's necessary to make sure
+  // spans have the resource and are queued in the exporter
+  await gcpDetector.detect().waitForAsyncAttributes();
   await sdk.shutdown();
   await new Promise(resolve => server.close(resolve));
 });
A couple of observations:
- the test becomes slower, since the requests are made again at the end of the fixture code
- this detector just falls back to an empty string if the request fails, but I'm not sure if this will work for other detectors

I'm keen to use this simplified way, but it is possible we'll have to do the server mock for other detectors (I hope not). WDYT?
After a discussion with @trentm I've decided to take a different approach:
Which problem is this PR solving?

This is a first PR towards fixing #2320. Some of the detectors use http or other instrumented modules to get resource-related data. In this case the GCP resource detector uses the http module to get info, and this results in some spans being exported. These spans are not related to the instrumented service, so they shouldn't be there.

Short description of the changes

- Update the GcpDetector class to implement the DetectorSync interface
- Use context.with along with the suppressTracing API to make sure spans are not exported (a sketch of this pattern follows below)
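As an illustration of the approach described above, a minimal sketch (not the PR's actual diff; fetchGcpMetadata is a hypothetical placeholder for the gcp-metadata lookups, and the Resource construction is an assumption about how the async attributes are wired up):

// Sketch of the suppression pattern: run the detector's metadata lookups
// in a context where tracing is suppressed, so the detector's own http
// calls don't produce exported spans.
const { context } = require('@opentelemetry/api');
const { suppressTracing } = require('@opentelemetry/core');
const { Resource } = require('@opentelemetry/resources');

// Illustrative placeholder for the real gcp-metadata lookups.
async function fetchGcpMetadata() {
  return {};
}

function detect() {
  // Spans started while the suppressed context is active are skipped by
  // the http instrumentation, so they never reach the exporter.
  const asyncAttributes = context.with(
    suppressTracing(context.active()),
    () => fetchGcpMetadata()
  );
  // DetectorSync-style result: attributes are resolved asynchronously.
  return new Resource({}, asyncAttributes);
}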