illegal_argument_exception
Hi, I'm deploying the fluentd Helm chart with Terraform:

```hcl
resource "helm_release" "fluentd" {
  name       = "fluentd"
  repository = "https://fluent.github.io/helm-charts"
  chart      = "fluentd"
  version    = "0.5.2"
  namespace  = kubernetes_namespace.elk.metadata[0].name
  depends_on = [kubernetes_namespace.elk, helm_release.elasticsearch]

  set {
    name  = "variant"
    value = "elasticsearch8"
  }

  set {
    name  = "fileConfigs.04_outputs\\.conf"
    value = file("yaml/fluentd-conf.txt")
  }
}
```
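As an aside, if escaping the dot in the `set` key ever gets awkward, the same override can be passed through `values` with `yamlencode`. This is only a sketch under the assumption that the chart reads `fileConfigs` as a map of filename to config body, as the `set` key above suggests:

```hcl
# Sketch: pass fileConfigs via `values` instead of an escaped `set` key.
# Assumes the chart's values expose `fileConfigs` as a filename -> body map.
resource "helm_release" "fluentd" {
  name       = "fluentd"
  repository = "https://fluent.github.io/helm-charts"
  chart      = "fluentd"
  version    = "0.5.2"
  namespace  = kubernetes_namespace.elk.metadata[0].name
  depends_on = [kubernetes_namespace.elk, helm_release.elasticsearch]

  values = [
    yamlencode({
      variant = "elasticsearch8"
      fileConfigs = {
        "04_outputs.conf" = file("yaml/fluentd-conf.txt")
      }
    })
  ]
}
```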
fluentd-conf.txt
```
<label @OUTPUT>
  <match **>
    @type elasticsearch
    host "elasticsearch-master"
    port 9200
    user elastic
    password XXXX  # password changed
    verify_es_version_at_startup false
    scheme https
    ssl_verify false
    logstash_format: true
    request_timeout: 60
    reload_connections: true
    reconnect_on_error: true
    reload_on_failure: true
    logstash_prefix kubernetes
  </match>
</label>
```
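One thing worth flagging in the config above: fluentd's config format is space-separated (`key value`), not YAML (`key: value`), so keys like `logstash_format:` are treated as unknown parameter names and silently ignored — the startup warnings in the logs below say exactly that. For reference, the output block in fluentd's own syntax would look like this (password is a placeholder):

```
<label @OUTPUT>
  <match **>
    @type elasticsearch
    host elasticsearch-master
    port 9200
    user elastic
    password XXXX           # placeholder
    verify_es_version_at_startup false
    scheme https
    ssl_verify false
    logstash_format true    # no colon: fluentd uses space-separated key/value pairs
    request_timeout 60
    reload_connections true
    reconnect_on_error true
    reload_on_failure true
    logstash_prefix kubernetes
  </match>
</label>
```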
Elasticsearch root URL response:

```json
{
  "name" : "elasticsearch-master-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "1KMjl5wQQD2S8EwQx_wRfQ",
  "version" : {
    "number" : "8.5.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "c1310c45fc534583afe2c1c03046491efba2bba2",
    "build_date" : "2022-11-09T21:02:20.169855900Z",
    "build_snapshot" : false,
    "lucene_version" : "9.4.1",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
```
The fluentd, Elasticsearch, and Kibana pods are all running, but I get errors when pushing logs to Elasticsearch; please check the logs at the end.
```
kubectl logs pod/fluentd-9cd79 -n elk
2024-11-12 09:41:04 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2024-11-12 09:41:04 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/../../../etc/fluent/fluent.conf"
2024-11-12 09:41:04 +0000 [info]: gem 'fluentd' version '1.16.2'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-concat' version '2.5.0'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-dedot_filter' version '1.0.0'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-detect-exceptions' version '0.0.15'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.3.0'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-grok-parser' version '2.6.2'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-json-in-json-2' version '1.0.2'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '3.2.0'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-parser-cri' version '0.1.1'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.1.0'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.1'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2024-11-12 09:41:04 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.5'
2024-11-12 09:41:05 +0000 [warn]: [filter_kube_metadata] !! The environment variable 'K8S_NODE_NAME' is not set to the node name which can affect the API server and watch efficiency !!
2024-11-12 09:41:05 +0000 [info]: using configuration file: <ROOT>
  <label @FLUENT_LOG>
    <match **>
      @type null
      @id ignore_fluent_logs
    </match>
  </label>
  <source>
    @type tail
    @id in_tail_container_logs
    @label @KUBERNETES
    path "/var/log/containers/*.log"
    pos_file "/var/log/fluentd-containers.log.pos"
    tag "kubernetes.*"
    read_from_head true
    emit_unmatched_lines true
    <parse>
      @type "multi_format"
      unmatched_lines true
      <pattern>
        format json
        time_key "time"
        time_type string
        time_format "%Y-%m-%dT%H:%M:%S.%NZ"
        keep_time_key false
      </pattern>
      <pattern>
        format regexp
        expression /^(?<time>.+) (?<stream>stdout|stderr)( (.))? (?<log>.*)$/
        time_format "%Y-%m-%dT%H:%M:%S.%NZ"
        keep_time_key false
      </pattern>
    </parse>
  </source>
  <source>
    @type prometheus
    bind "0.0.0.0"
    port 24231
    metrics_path "/metrics"
  </source>
  <label @KUBERNETES>
    <match kubernetes.var.log.containers.fluentd**>
      @type relabel
      @label @FLUENT_LOG
    </match>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
      skip_labels false
      skip_container_metadata false
      skip_namespace_metadata true
      skip_master_url true
    </filter>
    <match **>
      @type relabel
      @label @DISPATCH
    </match>
  </label>
  <label @DISPATCH>
    <filter **>
      @type prometheus
      <metric>
        name fluentd_input_status_num_records_total
        type counter
        desc The total number of incoming records
        <labels>
          tag ${tag}
          hostname ${hostname}
        </labels>
      </metric>
    </filter>
    <match **>
      @type relabel
      @label @OUTPUT
    </match>
  </label>
  <label @OUTPUT>
    <match **>
      @type elasticsearch
      host "elasticsearch-master"
      port 9200
      user "elastic"
      password xxxxxx
      verify_es_version_at_startup false
      scheme https
      ssl_verify false
      logstash_format: true
      request_timeout: 60
      reload_connections: true
      reconnect_on_error: true
      reload_on_failure: true
      logstash_prefix "kubernetes"
    </match>
  </label>
</ROOT>
2024-11-12 09:41:05 +0000 [info]: starting fluentd-1.16.2 pid=7 ruby="3.1.4"
2024-11-12 09:41:05 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/fluentd/vendor/bundle/ruby/3.1.0/bin/fluentd", "-c", "/fluentd/etc/../../../etc/fluent/fluent.conf", "-p", "/fluentd/plugins", "--gemfile", "/fluentd/Gemfile", "-r", "/fluentd/vendor/bundle/ruby/3.1.0/gems/fluent-plugin-elasticsearch-5.3.0/lib/fluent/plugin/elasticsearch_simple_sniffer.rb", "--under-supervisor"]
2024-11-12 09:41:06 +0000 [info]: #0 init worker0 logger path=nil rotate_age=nil rotate_size=nil
2024-11-12 09:41:06 +0000 [info]: adding match in @FLUENT_LOG pattern="**" type="null"
2024-11-12 09:41:06 +0000 [info]: adding match in @KUBERNETES pattern="kubernetes.var.log.containers.fluentd**" type="relabel"
2024-11-12 09:41:06 +0000 [info]: adding filter in @KUBERNETES pattern="kubernetes.**" type="kubernetes_metadata"
2024-11-12 09:41:06 +0000 [warn]: #0 [filter_kube_metadata] !! The environment variable 'K8S_NODE_NAME' is not set to the node name which can affect the API server and watch efficiency !!
2024-11-12 09:41:06 +0000 [info]: adding match in @KUBERNETES pattern="**" type="relabel"
2024-11-12 09:41:06 +0000 [info]: adding filter in @DISPATCH pattern="**" type="prometheus"
2024-11-12 09:41:06 +0000 [info]: adding match in @DISPATCH pattern="**" type="relabel"
2024-11-12 09:41:06 +0000 [info]: adding match in @OUTPUT pattern="**" type="elasticsearch"
2024-11-12 09:41:06 +0000 [info]: adding source type="tail"
2024-11-12 09:41:06 +0000 [info]: adding source type="prometheus"
2024-11-12 09:41:06 +0000 [warn]: parameter 'logstash_format:' in <match **> @type elasticsearch host "elasticsearch-master" port 9200 user "elastic" password xxxxxx verify_es_version_at_startup false scheme https ssl_verify false logstash_format: true request_timeout: 60 reload_connections: true reconnect_on_error: true reload_on_failure: true logstash_prefix "kubernetes" </match> is not used.
2024-11-12 09:41:06 +0000 [warn]: parameter 'request_timeout:' in <match **> @type elasticsearch host "elasticsearch-master" port 9200 user "elastic" password xxxxxx verify_es_version_at_startup false scheme https ssl_verify false logstash_format: true request_timeout: 60 reload_connections: true reconnect_on_error: true reload_on_failure: true logstash_prefix "kubernetes" </match> is not used.
2024-11-12 09:41:06 +0000 [warn]: parameter 'reload_connections:' in <match **> @type elasticsearch host "elasticsearch-master" port 9200 user "elastic" password xxxxxx verify_es_version_at_startup false scheme https ssl_verify false logstash_format: true request_timeout: 60 reload_connections: true reconnect_on_error: true reload_on_failure: true logstash_prefix "kubernetes" </match> is not used.
2024-11-12 09:41:06 +0000 [warn]: parameter 'reconnect_on_error:' in <match **> @type elasticsearch host "elasticsearch-master" port 9200 user "elastic" password xxxxxx verify_es_version_at_startup false scheme https ssl_verify false logstash_format: true request_timeout: 60 reload_connections: true reconnect_on_error: true reload_on_failure: true logstash_prefix "kubernetes" </match> is not used.
2024-11-12 09:41:06 +0000 [warn]: parameter 'reload_on_failure:' in <match **> @type elasticsearch host "elasticsearch-master" port 9200 user "elastic" password xxxxxx verify_es_version_at_startup false scheme https ssl_verify false logstash_format: true request_timeout: 60 reload_connections: true reconnect_on_error: true reload_on_failure: true logstash_prefix "kubernetes" </match> is not used.
2024-11-12 09:41:06 +0000 [info]: #0 starting fluentd worker pid=18 ppid=7 worker=0
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/api-sawit-production-7fc44fb9bc-ldw5z_api-sawit-production_api-sawit-production-a794537a1e5c86038de96d04eb8bff1cdd523b4a7f915be73179bef3648b89ee.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/certmanager-cert-manager-webhook-88b749f54-dk62m_certmanager_cert-manager-webhook-91cb7f6844fdb05b34b5d56e74f63407ba0347e7c909db7343b79d72318381ef.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cilium-792bf_kube-system_apply-sysctl-overwrites-4a5e0ccd2103d246fa0a9a786eab58ee30e860c4ecf0c6b6ad545fc5ced7d3b6.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cilium-792bf_kube-system_cilium-agent-67b443fd3466cb8768da0a188409772d3f4bd923914baa2eb8677814260294e3.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cilium-792bf_kube-system_clean-cilium-state-16443df8a2ccba395430d5192d21d1aa14b2db221340e7fae06793d4638e364f.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cilium-792bf_kube-system_config-6e7ed5af6f8fa19bf5e9bf036cddb3a46f4c491ceaa16f053c280ecce65456a3.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cilium-792bf_kube-system_delay-cilium-for-ccm-93f90600764c125c92fe1e9ff7961a12305ff5f8e89b6025bb1045ef90af05f8.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cilium-792bf_kube-system_install-cni-binaries-43710b7ff59966bcec08fd4ab2fbd3cb5d9fb05d8d5c528d185833655a2ff2bb.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cilium-792bf_kube-system_mount-bpf-fs-aa97bea9d7569209c80698a645b64d3f98a9933ff58bb4e41e985e0251edd9b2.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cilium-792bf_kube-system_mount-cgroup-6f57ed5859f90ed3a0bb8faf257d2738bd07c17a2fd25dd17bcdbeb13a38f9da.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/coredns-6dbbdf458-wsmw2_kube-system_coredns-47cac14d567ee696ccc791cdeeb708f27ea42d232a9d77c387c330b2aab485e9.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cpc-bridge-proxy-tgvcf_kube-system_cpc-bridge-proxy-dc8351cb3244031821b9eace8b40b8699f29b0fcad8d0e07c8f94510d166362c.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/cpc-bridge-proxy-tgvcf_kube-system_init-iptables-cb3ba98bd1d7f792ba0fb87014bc2c0392b2cae72d8c34035cba958605d537ab.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/csi-do-node-zkr7d_kube-system_csi-do-plugin-a5d5f10f9571707dbc99f29f8e09fb7d6411f83667f49ff2e2c95e0980e285bd.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/csi-do-node-zkr7d_kube-system_csi-node-driver-registrar-0abf228676a78d004c70ee273ab0970b7ce82fc7a46c0014ed5ed26c2c9087c6.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/do-node-agent-hlxcv_kube-system_do-node-agent-5137dce512e7ce8db006fc2e389640b695938420331243dc008aea55f1db89a0.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/do-node-agent-hlxcv_kube-system_dynamic-config-3573eba4fc4bc4b744ff89cf9abba0d0cd94e5a7bc4a86244e5686f44d35fed9.log
2024-11-12 09:41:06 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/fluentd-9cd79_elk_fluentd-a4e13bf4e0b67972106f03af04e5be2bbf1bb9be8fdc72165d2e2de982a03c11.log
2024-11-12 09:41:07 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/hubble-relay-6f967c7d65-lms8q_kube-system_hubble-relay-f992dc6088e21300a036d35f656b52f9ce3af2559cf7a1c0bf00980b1419ea80.log
2024-11-12 09:41:07 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/hubble-ui-b755f4f46-z568x_kube-system_backend-ae236a9b5049d2376a35e715087037273c525c557472f45113c92c32aa8c80df.log
2024-11-12 09:41:07 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/hubble-ui-b755f4f46-z568x_kube-system_frontend-dbbc038fe5b47c3f0c1b22bedf0ddd8b437387c1f590e028a4e3c00eea56f339.log
2024-11-12 09:41:07 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/konnectivity-agent-rl84j_kube-system_konnectivity-agent-7f3a70a6ce6152534219d4ed9b66d1b98f3ba1d764de802d255c5cce2f6ac0e9.log
2024-11-12 09:41:07 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/kube-prometheus-stack-prometheus-node-exporter-88vts_monitoring_node-exporter-d9227a0a500e0ec90b2b9ccc46ac441e62c0d26bf7fb0520e3ded7340a6ec547.log
2024-11-12 09:41:07 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/kube-proxy-lmlmm_kube-system_kube-proxy-dc74dc70c9c41b3901865899281455395cf0e1ec9e285085068e4c87e042e160.log
2024-11-12 09:41:07 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/laravel-sawit-oauth-f65c9c84-kl6jq_laravel-sawit-oauth_laravel-sawit-oauth-ede09bcd15d5c9a6948902bf5a1a5e98f76af2e71f2d835c223ce836d6a9f753.log
2024-11-12 09:41:07 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/sawit-admin-redis-6c8d7c4794-mtxsx_sawit-admin_sawit-admin-redis-a243e2e98a7026704009836a2d6c20add9019fed3c5b64a3f41bdda84a0832e2.log
2024-11-12 09:41:07 +0000 [info]: #0 fluentd worker is now running worker=0
2024-11-12 09:42:02 +0000 [info]: #0 [filter_kube_metadata] stats - namespace_cache_size: 19, pod_cache_size: 91, pod_cache_watch_misses: 16, pod_cache_watch_delete_ignored: 4, pod_cache_watch_updates: 17, pod_cache_watch_ignored: 4, pod_cache_api_updates: 2, id_cache_miss: 2, pod_cache_host_updates: 91, namespace_cache_host_updates: 19
2024-11-12 09:42:06 +0000 [warn]: #0 failed to flush the buffer. retry_times=0 next_retry_time=2024-11-12 09:42:08 +0000 chunk="626b404375a38e5802c93ed5760e24e5" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-master\", :port=>9200, :scheme=>\"https\", :user=>\"elastic\", :password=>\"obfuscated\"}): [400] {\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"Action/metadata line [1] contains an unknown parameter [_type]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"Action/metadata line [1] contains an unknown parameter [_type]\"},\"status\":400}"
```
FYI: I exec'd into the pod and used port forwarding to check the Elasticsearch URL; it works perfectly from inside the fluentd pod.
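For context on the `[400]` error: Elasticsearch 8 removed mapping types, so it rejects any bulk request whose metadata still carries `_type`. With `verify_es_version_at_startup false`, fluent-plugin-elasticsearch cannot detect the server version at startup and may fall back to its default (older) API behavior that still sends `_type`. A hedged sketch of an output block that works around this, assuming fluent-plugin-elasticsearch 5.x (both options exist in that plugin; password is a placeholder):

```
<match **>
  @type elasticsearch
  host elasticsearch-master
  port 9200
  scheme https
  ssl_verify false
  user elastic
  password XXXX                    # placeholder
  verify_es_version_at_startup false
  default_elasticsearch_version 8  # version assumed when startup detection is disabled
  suppress_type_name true          # drop _type from bulk metadata; ES 8 rejects it
  logstash_format true
  logstash_prefix kubernetes
</match>
```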