$ curl http://localhost:14269/metrics
curl: (7) Failed to connect to localhost port 14269 after 0 ms: Connection refused
Expected behavior
I expect to see Prometheus-style metrics, as per:
$ curl http://localhost:14269/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.2451e-05
go_gc_duration_seconds{quantile="0.25"} 2.2158e-05
... <snip> ...
Relevant log output
The problem is visible here: when the operator creates the collector, the admin port the collector is configured to listen on does not match the port exposed on the pod.
$ kubectl logs -f pod/simple-query-65689877c8-xdfj7
Defaulted container "jaeger-query" out of: jaeger-query, jaeger-agent
2024/03/06 01:04:41 maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
2024/03/06 01:04:41 application version: git-commit=9866eba85aed1b0a66a77c8c6928a372edc5040f, git-version=v1.52.0, build-date=2023-12-06T09:43:23Z
{"level":"info","ts":1709687081.8386724,"caller":"flags/service.go:119","msg":"Mounting metrics handler on admin server","route":"/metrics"}
{"level":"info","ts":1709687081.8387108,"caller":"flags/service.go:125","msg":"Mounting expvar handler on admin server","route":"/debug/vars"}
{"level":"info","ts":1709687081.8391225,"caller":"flags/admin.go:129","msg":"Mounting health check on admin server","route":"/"}
{"level":"info","ts":1709687081.839157,"caller":"flags/admin.go:143","msg":"Starting admin HTTP server","http-addr":":16687"}
The collector starts its admin server on port 16687 instead of the pod's exposed port of 14269.
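Until the mismatch is fixed, a quick way to confirm which admin port is actually live (after port-forwarding the candidates) is to probe each one for the metrics endpoint. This is a diagnostic sketch, not part of the issue; the helper name and port list are illustrative.

```python
# Probe candidate admin ports and report which one serves GET /metrics.
# Candidate ports here are the pod-exposed port (14269) and the port the
# admin server actually bound to per the logs (16687).
from urllib.request import urlopen
from urllib.error import URLError


def find_metrics_port(host, ports, timeout=2.0):
    """Return the first port on which GET /metrics returns 200, or None."""
    for port in ports:
        try:
            with urlopen(f"http://{host}:{port}/metrics", timeout=timeout) as resp:
                if resp.status == 200:
                    return port
        except (URLError, OSError):
            # Connection refused / timeout: nothing listening on this port.
            continue
    return None
```

For example, `find_metrics_port("localhost", [14269, 16687])` would return `16687` in the broken state described above, once both ports are forwarded.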
Screenshot
No response
Additional context
The problem can be worked around by explicitly setting the admin port on the collector pod using the config section.
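A minimal sketch of that workaround, assuming the collector honors Jaeger's `admin.http.host-port` option when passed through the CR's options map (the instance name is illustrative, and `:14269` is the port the pod already exposes):

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-query
spec:
  collector:
    options:
      # Force the admin server onto the port the pod spec exposes.
      admin:
        http:
          host-port: ":14269"
```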
Thanks @gwvandesteeg for creating an issue. Would you like to send a PR?
I wouldn't know where to start at this stage with regard to how the operator creates the collector and sets the relevant settings. If I get some time I'd be happy to give it a go, but that is unlikely to happen quickly; if anyone else wants to take it on, go for it.
What happened?
When a Jaeger instance is configured via CR
I want to scrape the collector's metrics using Prometheus.
Steps to reproduce
Create a Jaeger instance using the CR without specifying an explicit HTTP host-port setting for the collector
Check the port exposed as the admin port
Port-forward to the "admin-http" port specified in the Pod specification
curl the metrics endpoint
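The steps above might look like the following transcript (the pod name is taken from the log output earlier; the jsonpath query is illustrative):

```
$ kubectl get pod simple-query-65689877c8-xdfj7 \
    -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.ports}{"\n"}{end}'
$ kubectl port-forward pod/simple-query-65689877c8-xdfj7 14269:14269 &
$ curl http://localhost:14269/metrics
```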
Problem may be related to: #1445
Jaeger backend version
v1.52.0
SDK
n/a
Pipeline
n/a
Storage backend
n/a
Operating system
n/a
Deployment model
Kind kubernetes cluster
Deployment configs
No response