Replies: 2 comments 1 reply
-
I'm doing this on Filebeat itself, so the message is already parsed before the events reach Vector. This could work for you as well.
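A minimal sketch of a Filebeat-side approach, assuming the decode_json_fields and drop_fields processors; the processor choice and field list are assumptions based on the question, not the commenter's verbatim settings:

# filebeat.yml (sketch; processor choice and field list are assumptions)
processors:
  # Decode the JSON string in `message` and merge its keys into the event root.
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
  # Drop the raw wrapper and the Beats metadata fields.
  - drop_fields:
      fields: ["message", "agent", "ecs", "input", "log"]
      ignore_missing: true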
1 reply
-
In your remap transform you can use the merge() function to fold the parsed fields into the event:
parse_logs:
  type: remap
  inputs: ["dummy_logs"]
  source: |
    # Parse the raw JSON string carried in `message`.
    json = parse_json!(.message)
    # Rename @timestamp; the `@` requires a quoted path segment.
    json.log_timestamp = del(json."@timestamp")
    # merge() returns the merged object, so assign it back to the event root.
    . = merge(., object!(json))
    # Drop the wrapper and the Beats metadata fields.
    del(.message)
    del(.agent)
    del(.ecs)
    del(.input)
    del(.log)
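Two VRL details worth noting: functions like merge() don't mutate their arguments, which is why the result is assigned back to ., and del() removes a field and returns its value, so the @timestamp rename happens in a single step.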
0 replies
-
The Kafka log content looks roughly like this:
{ "@metadata": { "beat": "filebeat", "type": "_doc", "version": "7.10.0" }, "message": { "upstream_response_time": "0.056", "upstream_status": "200" } }
In the end, the message field gets written to Elasticsearch as well, duplicating the parsed fields, along these lines:
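A sketch of the unwanted result, assuming only the fields from the sample above (the real document may carry more):

{
  "message": {
    "upstream_response_time": "0.056",
    "upstream_status": "200"
  },
  "upstream_response_time": "0.056",
  "upstream_status": "200"
}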
The Vector configuration is as follows:
[transforms.parse_logs]
type = "remap"
inputs = ["dummy_logs"]
source = '''
. = parse_json!(.message)
.message = parse_json!(.message)
'''
My goal is that the document finally shown in Elasticsearch has no message field, like this:
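A sketch of the desired document, again assuming only the sample's fields:

{
  "upstream_response_time": "0.056",
  "upstream_status": "200"
}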