I am experiencing difficulties with the Azure Log Analytics Exporter in a Kubernetes environment where ServiceMonitor resources are used to manage scraping configurations for Prometheus. Despite configuring separate probes with distinct workspaces, the exporter is querying metrics across all workspaces, leading to the collection of irrelevant metrics and numerous entries with a value of 0. This behaviour persists even after configuring the ServiceMonitor with specific parameters for each probe, intending to isolate the metric queries to their respective workspaces.
Steps to Reproduce:
Deploy the Azure Log Analytics Exporter in a Kubernetes cluster.
Configure multiple probes in the exporter, each with a unique workspace ID.
Set up ServiceMonitor resources for Prometheus to scrape metrics from the exporter, using the probe-specific endpoints and workspace parameters (see the example ServiceMonitor after this list).
Observe the scraped metrics in Prometheus, noting the presence of metrics from all workspaces in each probe's dataset.
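For reference, this is roughly how the ServiceMonitor is set up. It is a minimal sketch: the probe path (`/probe`) and the `workspace` parameter name are assumptions based on the exporter's probe-style interface, and the workspace IDs are placeholders; adjust all of these to match the actual deployment.

```yaml
# Minimal sketch of the ServiceMonitor setup (assumed path/parameter names; adjust to your deployment).
# Each endpoint is intended to scrape one probe, restricted to a single workspace
# via the "workspace" URL parameter.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: azure-loganalytics-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: azure-loganalytics-exporter
  endpoints:
    - port: http
      path: /probe                 # assumed probe endpoint
      interval: 60s
      params:
        workspace: ["00000000-0000-0000-0000-000000000001"]  # placeholder workspace ID
    - port: http
      path: /probe                 # assumed probe endpoint
      interval: 60s
      params:
        workspace: ["00000000-0000-0000-0000-000000000002"]  # placeholder workspace ID
```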
Expected Behaviour:
Each probe should only fetch and return metrics relevant to its configured workspace, thereby preventing the mix-up of data across different workspaces.
Actual Behaviour:
All probes are fetching metrics from all workspaces, not just the ones they are configured to query, resulting in a large number of irrelevant metrics.
I suspect there may be a bug in the exporter’s handling of workspace-specific queries or in the ServiceMonitor configuration process that prevents the proper isolation of metrics per workspace.
Could you please help me figure out how to debug this issue?
Also, if a custom query returns no value, the exporter should not emit a metric with a value of 0; it should simply omit that series.
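As an interim workaround for the cross-workspace pollution, I am considering dropping the unwanted series at scrape time with metricRelabelings. This sketch assumes the exporter attaches a `workspace` label to each series (the label name is an assumption; it needs to be checked against the actual exporter output), and it only addresses the irrelevant series, not the zero-value ones:

```yaml
# Fragment of the endpoints: list from the ServiceMonitor above.
# Assumes each scraped series carries a "workspace" label (assumption; verify against the exporter output).
    - port: http
      path: /probe                 # assumed probe endpoint
      params:
        workspace: ["00000000-0000-0000-0000-000000000001"]  # placeholder workspace ID
      metricRelabelings:
        - sourceLabels: [workspace]
          # Keep series for this endpoint's workspace; the empty alternative also keeps
          # series that have no workspace label at all (e.g. the exporter's own metrics).
          regex: "(00000000-0000-0000-0000-000000000001|)"
          action: keep
```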