
[Tracing] Implement initial weblog tests for the runtime metrics feature #3878

Merged
merged 7 commits, Jan 31, 2025
1 change: 1 addition & 0 deletions manifests/cpp.yml
@@ -253,6 +253,7 @@ tests/:
Test_Config_ObfuscationQueryStringRegexp_Empty: missing_feature
Test_Config_RuntimeMetrics_Default: missing_feature
Test_Config_RuntimeMetrics_Enabled: missing_feature
Test_Config_RuntimeMetrics_Enabled_WithRuntimeId: missing_feature
Test_Config_UnifiedServiceTagging_CustomService: missing_feature
Test_Config_UnifiedServiceTagging_Default: missing_feature
test_distributed.py:
3 changes: 2 additions & 1 deletion manifests/dotnet.yml
@@ -496,7 +496,8 @@ tests/:
Test_Config_ObfuscationQueryStringRegexp_Default: v3.4.1
Test_Config_ObfuscationQueryStringRegexp_Empty: v3.4.1
Test_Config_RuntimeMetrics_Default: incomplete_test_app (test needs to account for dotnet runtime metrics)
Test_Config_RuntimeMetrics_Enabled: incomplete_test_app (test needs to account for dotnet runtime metrics)
Test_Config_RuntimeMetrics_Enabled: v2.0.0
Test_Config_RuntimeMetrics_Enabled_WithRuntimeId: v2.0.0
Test_Config_UnifiedServiceTagging_CustomService: v3.3.0
Test_Config_UnifiedServiceTagging_Default: v3.3.0
test_data_integrity.py:
3 changes: 2 additions & 1 deletion manifests/golang.yml
@@ -600,7 +600,8 @@ tests/:
Test_Config_ObfuscationQueryStringRegexp_Default: v1.67.0
Test_Config_ObfuscationQueryStringRegexp_Empty: v1.67.0
Test_Config_RuntimeMetrics_Default: incomplete_test_app (test needs to account for golang runtime metrics)
Test_Config_RuntimeMetrics_Enabled: incomplete_test_app (test needs to account for golang runtime metrics)
Test_Config_RuntimeMetrics_Enabled: incomplete_test_app (should be in v1.18.0 but we're not receiving series in tests)
Test_Config_RuntimeMetrics_Enabled_WithRuntimeId: incomplete_test_app (should be in v1.18.0 but we're not receiving series in tests)
Test_Config_UnifiedServiceTagging_CustomService: v1.67.0
Test_Config_UnifiedServiceTagging_Default: v1.67.0
test_data_integrity.py:
11 changes: 10 additions & 1 deletion manifests/java.yml
@@ -1711,7 +1711,16 @@ tests/:
Test_Config_ObfuscationQueryStringRegexp_Default: v1.39.0
Test_Config_ObfuscationQueryStringRegexp_Empty: v1.39.0
Test_Config_RuntimeMetrics_Default: incomplete_test_app (test needs to account for java runtime metrics)
Test_Config_RuntimeMetrics_Enabled: incomplete_test_app (test needs to account for java runtime metrics)
Test_Config_RuntimeMetrics_Enabled:
  '*': v0.64.0
  spring-boot-3-native: missing_feature (GraalVM. Tracing support only)
  spring-boot-openliberty: incomplete_test_app (needs investigation to understand why test is failing)
  spring-boot-wildfly: incomplete_test_app (needs investigation to understand why test is failing)
Test_Config_RuntimeMetrics_Enabled_WithRuntimeId:
  '*': v0.64.0
  spring-boot-3-native: missing_feature (GraalVM. Tracing support only)
  spring-boot-openliberty: incomplete_test_app (needs investigation to understand why test is failing)
  spring-boot-wildfly: incomplete_test_app (needs investigation to understand why test is failing)
Test_Config_UnifiedServiceTagging_CustomService: v1.39.0
Test_Config_UnifiedServiceTagging_Default: v1.39.0
test_data_integrity.py:
1 change: 1 addition & 0 deletions manifests/nodejs.yml
@@ -860,6 +860,7 @@ tests/:
Test_Config_ObfuscationQueryStringRegexp_Empty: *ref_3_0_0
Test_Config_RuntimeMetrics_Default: *ref_3_0_0
Test_Config_RuntimeMetrics_Enabled: *ref_3_0_0
Test_Config_RuntimeMetrics_Enabled_WithRuntimeId: missing_feature
Test_Config_UnifiedServiceTagging_CustomService: *ref_5_25_0
Test_Config_UnifiedServiceTagging_Default: *ref_5_25_0
test_distributed.py:
1 change: 1 addition & 0 deletions manifests/php.yml
@@ -479,6 +479,7 @@ tests/:
Test_Config_ObfuscationQueryStringRegexp_Empty: v1.5.0
Test_Config_RuntimeMetrics_Default: missing_feature
Test_Config_RuntimeMetrics_Enabled: missing_feature
Test_Config_RuntimeMetrics_Enabled_WithRuntimeId: missing_feature
Test_Config_UnifiedServiceTagging_CustomService: v1.4.0
Test_Config_UnifiedServiceTagging_Default: v1.4.0
test_distributed.py:
3 changes: 2 additions & 1 deletion manifests/python.yml
@@ -895,6 +895,8 @@ tests/:
Test_Config_ObfuscationQueryStringRegexp_Empty: v2.15.0
Test_Config_RuntimeMetrics_Default: incomplete_test_app (test needs to account for python runtime metrics)
Test_Config_RuntimeMetrics_Enabled: incomplete_test_app (test needs to account for python runtime metrics)
Test_Config_RuntimeMetrics_Enabled: incomplete_test_app (Python seems to send sketches instead of gauges/counters)
Test_Config_RuntimeMetrics_Enabled_WithRuntimeId: missing_feature
Test_Config_UnifiedServiceTagging_CustomService: v2.0.0
Test_Config_UnifiedServiceTagging_Default: v2.0.0
test_data_integrity.py:
3 changes: 2 additions & 1 deletion manifests/ruby.yml
@@ -498,7 +498,8 @@ tests/:
Test_Config_ObfuscationQueryStringRegexp_Default: bug (APMAPI-1013)
Test_Config_ObfuscationQueryStringRegexp_Empty: missing_feature (environment variable is not supported)
Test_Config_RuntimeMetrics_Default: incomplete_test_app (test needs to account for ruby runtime metrics)
Test_Config_RuntimeMetrics_Enabled: incomplete_test_app (test needs to account for ruby runtime metrics)
Test_Config_RuntimeMetrics_Enabled: missing_feature (should be in v0.44.0 but we're not receiving series in tests)
Test_Config_RuntimeMetrics_Enabled_WithRuntimeId: missing_feature
Test_Config_UnifiedServiceTagging_CustomService: v2.0.0
Test_Config_UnifiedServiceTagging_Default: v2.0.0
test_distributed.py:
66 changes: 54 additions & 12 deletions tests/test_config_consistency.py
@@ -4,12 +4,14 @@

import re
import json
import time
from utils import weblog, interfaces, scenarios, features, rfc, irrelevant, context, bug, missing_feature
from utils.tools import logger

# get the default log output
stdout = interfaces.library_stdout if context.library != "dotnet" else interfaces.library_dotnet_managed
runtime_metrics = {"nodejs": "runtime.node.mem.heap_total"}
runtime_metrics_langs = [".NET", "go", "nodejs", "python", "ruby"]
log_injection_fields = {"nodejs": {"message": "msg"}}


@@ -555,19 +557,59 @@ def test_log_injection_128bit_traceid_disabled(self):
@scenarios.runtime_metrics_enabled
@features.tracing_configuration_consistency
class Test_Config_RuntimeMetrics_Enabled:
    """Verify runtime metrics are enabled when DD_RUNTIME_METRICS_ENABLED=true"""
    """Verify runtime metrics are enabled when DD_RUNTIME_METRICS_ENABLED=true and that they have the proper tags"""

    # This test verifies runtime metrics by asserting the presence of a metric in the dogstatsd endpoint
    def test_config_runtimemetrics_enabled(self):
        for data in interfaces.library.get_data("/dogstatsd/v2/proxy"):
            lines = data["request"]["content"].split("\n")
            metric_found = False
            for line in lines:
                if runtime_metrics[context.library.library] in line:
                    metric_found = True
                    break
            assert metric_found, f"The metric {runtime_metrics[context.library.library]} was not found in any line"
            break

    def setup_main(self):
        self.req = weblog.get("/")

    def test_main(self):
        assert self.req.status_code == 200

        time.sleep(12)  # allow time for at least one runtime metrics flush to reach the agent
        runtime_metrics = [
            metric
            for _, metric in interfaces.agent.get_metrics()
            if metric["metric"].startswith("runtime.") or metric["metric"].startswith("jvm.")
        ]
        assert len(runtime_metrics) > 0

        for metric in runtime_metrics:
            # Split on the first ":" only, so tag values may themselves contain colons
            tags = dict(tag.split(":", 1) for tag in metric["tags"])
            assert tags.get("lang") in runtime_metrics_langs or tags.get("lang") is None

            # Test that Unified Service Tags are added to the runtime metrics
            assert tags["service"] == "weblog"
            assert tags["env"] == "system-tests"
            assert tags["version"] == "1.0.0"

            # Test that DD_TAGS are added to the runtime metrics
            # DD_TAGS=key1:val1,key2:val2 in default weblog containers
            assert tags["key1"] == "val1"
            assert tags["key2"] == "val2"
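The tag handling above can be sketched standalone. A minimal illustration, with a hand-built tag list (the values mirror the weblog defaults described in the test, not a captured agent payload), of folding DogStatsD-style `key:value` tags into a dict before asserting on them:

```python
# Hypothetical tag list, shaped like the "tags" field of an agent series point.
sample_tags = [
    "lang:nodejs",
    "service:weblog",
    "env:system-tests",
    "version:1.0.0",
    "key1:val1",
    "key2:val2",
]

# Split on the first ":" only, so tag values may themselves contain colons.
tags = dict(tag.split(":", 1) for tag in sample_tags)

# Unified Service Tags
assert tags["service"] == "weblog"
assert tags["env"] == "system-tests"
assert tags["version"] == "1.0.0"

# DD_TAGS entries (DD_TAGS=key1:val1,key2:val2 in the default weblog containers)
assert tags["key1"] == "val1"
assert tags["key2"] == "val2"
```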


@scenarios.runtime_metrics_enabled
@features.tracing_configuration_consistency
class Test_Config_RuntimeMetrics_Enabled_WithRuntimeId:
    """Verify runtime metrics are enabled when DD_RUNTIME_METRICS_ENABLED=true and that they have the runtime-id tag"""

    def setup_main(self):
        self.req = weblog.get("/")

    def test_main(self):
        assert self.req.status_code == 200

        time.sleep(12)  # allow time for at least one runtime metrics flush to reach the agent
        runtime_metrics = [
            metric
            for _, metric in interfaces.agent.get_metrics()
            if metric["metric"].startswith("runtime.") or metric["metric"].startswith("jvm.")
        ]
        assert len(runtime_metrics) > 0

        for metric in runtime_metrics:
            # Split on the first ":" only, so tag values may themselves contain colons
            tags = dict(tag.split(":", 1) for tag in metric["tags"])
            assert "runtime-id" in tags


@rfc("https://docs.google.com/document/d/1kI-gTAKghfcwI7YzKhqRv2ExUstcHqADIWA4-TZ387o/edit#heading=h.8v16cioi7qxp")
2 changes: 2 additions & 0 deletions utils/_context/_scenarios/__init__.py
@@ -783,7 +783,9 @@ class _Scenarios:

    runtime_metrics_enabled = EndToEndScenario(
        "RUNTIME_METRICS_ENABLED",
        weblog_env={"DD_DOGSTATSD_START_DELAY": "0"},
        runtime_metrics_enabled=True,
        use_proxy_for_weblog=False,
        doc="Test runtime metrics",
    )

1 change: 1 addition & 0 deletions utils/build/docker/agent.Dockerfile
@@ -10,6 +10,7 @@ RUN set -eux;\
# Datadog agent conf
RUN echo '\
log_level: DEBUG\n\
dogstatsd_non_local_traffic: true\n\
apm_config:\n\
apm_non_local_traffic: true\n\
trace_buffer: 5\n\
@@ -22,7 +22,6 @@ COPY --from=build /binaries/SYSTEM_TESTS_LIBRARY_VERSION SYSTEM_TESTS_LIBRARY_VE
COPY --from=build /app/target/myproject-0.0.1-SNAPSHOT.jar /app/app.jar
COPY --from=build /dd-tracer/dd-java-agent.jar .

ENV DD_JMXFETCH_ENABLED=false
ENV DD_TRACE_HEADER_TAGS='user-agent:http.request.headers.user-agent'
# FIXME: Fails on APPSEC_BLOCKING, see APPSEC-51405
# ENV DD_TRACE_INTERNAL_EXIT_ON_FAILURE=true
1 change: 1 addition & 0 deletions utils/build/docker/java/spring-boot-wildfly.Dockerfile
@@ -26,6 +26,7 @@ COPY --from=build /dd-tracer/dd-java-agent.jar .
COPY ./utils/build/docker/java/app.sh /app/app.sh
RUN chmod +x /app/app.sh

ENV DD_APP_CUSTOMLOGMANAGER=true
ENV DD_TRACE_HEADER_TAGS='user-agent:http.request.headers.user-agent'
ENV DD_TRACE_INTERNAL_EXIT_ON_FAILURE=true
ENV APP_EXTRA_ARGS="-Djboss.http.port=7777 -b=0.0.0.0"
16 changes: 16 additions & 0 deletions utils/interfaces/_agent.py
@@ -143,6 +143,22 @@ def get_spans(self, request=None):
    def get_spans_list(self, request):
        return [span for _, span in self.get_spans(request)]

    def get_metrics(self):
        """Attempts to fetch the metric series the agent will submit to the backend.

        Yields (data, point) pairs, one per metric point found in each
        /api/v2/series payload captured by the proxy.
        """

        for data in self.get_data(path_filters="/api/v2/series"):
            if "series" not in data["request"]["content"]:
                raise ValueError("series property is missing in agent payload")

            content = data["request"]["content"]["series"]

            for point in content:
                yield data, point
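As a self-contained sketch of how `get_metrics` is consumed by the runtime-metrics tests — the payload below is hand-built to mimic the `/api/v2/series` body shape, and the metric names are illustrative, not captured from a real agent:

```python
# Hand-built stand-in for one intercepted /api/v2/series request.
fake_data = {
    "request": {
        "content": {
            "series": [
                {"metric": "runtime.node.mem.heap_total", "tags": ["lang:nodejs"]},
                {"metric": "jvm.heap_memory", "tags": ["lang:jvm"]},
                {"metric": "trace.http.request.hits", "tags": []},
            ]
        }
    }
}

def iter_points(data):
    # Mirrors get_metrics(): unpack a series payload into (data, point) pairs.
    content = data["request"]["content"]
    if "series" not in content:
        raise ValueError("series property is missing in agent payload")
    for point in content["series"]:
        yield data, point

# Filter to runtime metrics the same way the tests do.
runtime_points = [
    point
    for _, point in iter_points(fake_data)
    if point["metric"].startswith(("runtime.", "jvm."))
]
assert len(runtime_points) == 2
```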

def get_dsm_data(self):
return self.get_data(path_filters="/api/v0.1/pipeline_stats")
