
[otelcol.receiver.loki] Status Code 503, Message=telemetry type is not supported #2045

Open

StefanSa opened this issue on Nov 6, 2024 · 0 comments

Labels: bug (Something isn't working)

StefanSa commented on Nov 6, 2024

What's wrong?

Hi there,
I get the following error message when I try to send logs to the internal Alloy instance (Docker Compose stack).

ts=2024-11-04T14:11:02.880506132Z level=error msg="Exporting failed. Dropping data." component_path=/ component_id=otelcol.exporter.otlphttp.default error="interrupted due to shutdown: Throttle (0s), error: rpc error: code = Unavailable desc = error exporting items, request to http://172.17.34.36:34318/v1/logs responded with HTTP Status Code 503, Message=telemetry type is not supported, Details=[]" dropped_items=267

However, when I send metrics, there is no problem.

Here is my local Alloy config:

local.file_match "logs_integrations_integrations_node_exporter_direct_scrape" {
  path_targets = [{
    __address__ = "localhost",
    __path__    = "/var/log/{syslog,messages,*.log}",
    instance    = constants.hostname,
    job         = "integrations/node_exporter",
  }]
}

loki.source.file "logs_integrations_integrations_node_exporter_direct_scrape" {
  targets    = local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets
  forward_to = [otelcol.receiver.loki.default.receiver]
}

otelcol.receiver.loki "default" {
  output {
    logs = [otelcol.processor.batch.batch_loki.input]
  }
}


otelcol.processor.batch "batch_loki" {
  output {
    logs = [otelcol.exporter.otlphttp.default.input]
  }
}


otelcol.exporter.otlphttp "default" {
  client {
    endpoint = "http://172.17.34.36:34318"
  }
}
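
For reference, as far as I understand it, the otlphttp exporter derives the per-signal URLs from client.endpoint, which is why the error above shows a request to http://172.17.34.36:34318/v1/logs. Below is a sketch of what I believe the equivalent explicit configuration would be; the per-signal endpoint arguments are spelled out only for illustration and are not part of my actual config:

otelcol.exporter.otlphttp "default" {
  client {
    endpoint = "http://172.17.34.36:34318"
  }

  // Spelled out for illustration only: to my understanding these match the
  // defaults that Alloy derives from client.endpoint above.
  logs_endpoint    = "http://172.17.34.36:34318/v1/logs"
  metrics_endpoint = "http://172.17.34.36:34318/v1/metrics"
  traces_endpoint  = "http://172.17.34.36:34318/v1/traces"
}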

And here is the internal Docker Compose Alloy config:

/*
Module Components: process_and_transform

Description: Traces data collection processing and transformation
*/

// Processing And Transformation
declare "process_and_transform" {

	/*****************************************************************
	* ARGUMENTS
	*****************************************************************/
	argument "traces_forward_to" {
		comment = "Must be a list(TracesReceiver) where collected traces should be forwarded to"
	}

	argument "logs_forward_to" {
		comment = "Must be a list(LogsReceiver) where collected logs should be forwarded to"
	}

	argument "metrics_forward_to" {
		comment = "Must be a list(MetricsReceiver) where collected metrics should be forwarded to"
	}

	/*****************************************************************
	* Jaeger for Metrics Logs Traces
	*****************************************************************/
	otelcol.receiver.jaeger "default" {
		protocols {
			grpc {
				endpoint = "0.0.0.0:14250"
			}

			thrift_http {
				endpoint = "0.0.0.0:14268"
			}

			thrift_binary {
				endpoint = "0.0.0.0:6832"
			}

			thrift_compact {
				endpoint = "0.0.0.0:6831"
			}
		}

		output {
			metrics = [otelcol.processor.batch.default.input]
			logs    = [otelcol.processor.resourcedetection.default.input]
			traces  = [otelcol.processor.resourcedetection.default.input]
		}
	}

	/*****************************************************************
	* Otelcol for Metrics Logs Traces
	*****************************************************************/
	otelcol.receiver.otlp "default" {
		grpc {
			endpoint = "0.0.0.0:4317"
		}

		http {
			endpoint = "0.0.0.0:4318"
		}

		output {
			metrics = [otelcol.processor.batch.default.input]
			logs    = [otelcol.processor.resourcedetection.default.input]
			traces  = [
				otelcol.processor.resourcedetection.default.input,
				otelcol.connector.spanlogs.autologging.input,
			]
		}
	}

	otelcol.processor.resourcedetection "default" {
		detectors = ["env"]

		output {
			logs   = [otelcol.processor.attributes.default.input]
			traces = [otelcol.processor.attributes.default.input]
		}
	}

	otelcol.processor.attributes "default" {
		// Inserts a new attribute "cluster" to spans where the key doesn't exist.
		action {
			key    = "cluster"
			value  = "docker-compose"
			action = "insert"
		}

		action {
			key    = "namespace"
			value  = "monitoring-system"
			action = "insert"
		}

		output {
			metrics = [otelcol.processor.transform.add_resource_attributes.input]
			traces  = [otelcol.processor.transform.add_resource_attributes.input]
		}
	}

	otelcol.processor.transform "add_resource_attributes" {
		error_mode = "ignore"

		log_statements {
			context    = "resource"
			statements = [
				`set(attributes["cluster"], attributes["cluster"]) where attributes["cluster"] == nil`,
				`set(attributes["namespace"], attributes["namespace"]) where attributes["namespace"] == nil`,
			]
		}

		trace_statements {
			context    = "resource"
			statements = [
				`set(attributes["cluster"], "docker-compose") where attributes["cluster"] == nil`,
				`set(attributes["namespace"], "monitoring-system") where attributes["namespace"] == nil`,
				`set(attributes["app"], attributes["service.name"]) where attributes["app"] == nil`,
			]
		}

		output {
			logs   = [otelcol.processor.filter.default.input]
			traces = [otelcol.processor.filter.default.input]
		}
	}

	otelcol.processor.filter "default" {
		error_mode = "ignore"

		output {
			logs   = [otelcol.processor.batch.default.input]
			traces = [otelcol.processor.batch.default.input]
		}
	}

	otelcol.processor.batch "default" {
		send_batch_size     = 16384
		send_batch_max_size = 0
		timeout             = "5s"

		output {
			metrics = [otelcol.processor.memory_limiter.default.input]
			logs    = [otelcol.processor.memory_limiter.default.input]
			traces  = [otelcol.processor.memory_limiter.default.input]
		}
	}

	otelcol.processor.memory_limiter "default" {
		check_interval         = "1s"
		limit_percentage       = 50
		spike_limit_percentage = 30

		output {
			metrics = [otelcol.exporter.prometheus.tracesmetrics.input]
			logs    = [otelcol.exporter.loki.traceslogs.input]
			traces  = argument.traces_forward_to.value
		}
	}

	otelcol.exporter.prometheus "tracesmetrics" {
		forward_to = argument.metrics_forward_to.value
	}

	otelcol.exporter.loki "traceslogs" {
		forward_to = [loki.process.traceslogs.receiver]
	}

	// The OpenTelemetry spanlog connector processes incoming trace spans and extracts data from them ready
	// for logging.
	otelcol.connector.spanlogs "autologging" {
		// We only want to output a line for each root span (ie. every single trace), and not for every
		// process or span (outputting a line for every span would be extremely verbose).
		spans     = false
		roots     = true
		processes = false

		// We want to ensure that the following three span attributes are included in the log line, if present.
		span_attributes = [
			"http.method",
			"http.target",
			"http.status_code",
		]

		// Overrides the default key in the log line to be `traceId`, which is then used by Grafana to
		// identify the trace ID for correlation with the Tempo datasource.
		overrides {
			trace_id_key = "traceId"
		}

		// Send to the OpenTelemetry Loki exporter.
		output {
			logs = [otelcol.exporter.loki.autologging.input]
		}
	}

	// Simply forwards the incoming OpenTelemetry log format out as a Loki log.
	// We need this stage to ensure we can then process the logline as a Loki object.
	otelcol.exporter.loki "autologging" {
		forward_to = [loki.process.autologging.receiver]
	}

	// The Loki processor allows us to accept a correctly formatted Loki log and mutate it into
	// a set of fields for output.
	loki.process "autologging" {
		// The JSON stage simply extracts the `body` (the actual logline) from the Loki log, ignoring
		// all other fields.
		stage.json {
			expressions = {"body" = ""}
		}
		// The output stage takes the body (the main logline) and uses this as the source for the output
		// logline. In this case, it essentially turns it into logfmt.
		stage.output {
			source = "body"
		}

		forward_to = [loki.process.traceslogs.receiver]
	}

	loki.process "traceslogs" {
		stage.tenant {
			value = "anonymous"
		}

		forward_to = argument.logs_forward_to.value
	}

	/*****************************************************************
	* EXPORTS
	*****************************************************************/
	export "alloy_traces_input" {
		value = otelcol.processor.resourcedetection.default.input
	}
}
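
For context, the declared module above is consumed by instantiating the custom component and wiring its arguments, roughly like this. The label and forward targets below are simplified placeholders I made up for illustration, not the exact wiring of my stack:

process_and_transform "default" {
	metrics_forward_to = [prometheus.remote_write.default.receiver]
	logs_forward_to    = [loki.write.default.receiver]
	traces_forward_to  = [otelcol.exporter.otlp.tempo.input]
}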

This somehow reminds me of this problem:
open-telemetry/opentelemetry-collector-contrib#27682

What am I doing wrong here, or what is the problem?

Thanks for any help.

Steps to reproduce

With the configuration shown here, try to send logs.

System information

Linux 6.4.0

Software version

v1.4.3

Configuration

No response

Logs

No response

StefanSa added the bug label on Nov 6, 2024