Create fields on the fly, according to the field names and datatypes that arrive at the output plugin #13
Following up on my issue #6:

I added the ability to process tens of different types of input data, each with a different number of fields and different datatypes, without coercion and without defining data_points:
1. Removes the need to use the data_points and coerce_values configuration to create an appropriate insert for InfluxDB. Should be used together with the fields_to_skip configuration:

       # This setting takes the data point (column) names and their values
       # from the fields of the event that arrives at the plugin.
       config :use_event_fields_for_data_points, :validate => :boolean, :default => true
       config :fields_to_skip, :validate => :array, :default => []
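In case it helps reviewers, here is a minimal sketch of what the option amounts to on the plugin side. This is not the PR's actual code: the method name `event_to_data_points` and the plain-hash event are assumptions for illustration only.

```ruby
# Sketch only: in the real plugin the event is a Logstash event object,
# not a plain hash, and @fields_to_skip comes from the config above.
def event_to_data_points(event_hash, fields_to_skip)
  # Every remaining event field becomes a data point: the field name is
  # used as the column name and the field value as the value to write.
  event_hash.reject { |name, _value| fields_to_skip.include?(name) }
end

event = { "kbmemfree" => 1024.0, "kbmemused" => 2048.0, "host" => "web1" }
event_to_data_points(event, ["host"])
# => {"kbmemfree"=>1024.0, "kbmemused"=>2048.0}
```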
This is my example config file. I retrieve a different number of fields, with different names, for CPU, memory, and disks, but I don't need a separate configuration per data type as in the master branch: I create the relevant field names and datatypes at the filter stage and simply skip the unwanted fields in the output plugin.
input {
}
filter {
  if [type] == "system.memory" {
    split {}
    grok {
      match => { "message" => "\A(?<sar_time>%{HOUR}:%{MINUTE}:%{SECOND})\s+%{NUMBER:kbmemfree:float}\s+%{NUMBER:kbmemused:float}\s+%{NUMBER:percenmemused:float}\s+%{NUMBER:kbbuffers:float}\s+%{NUMBER:kbcached:float}\s+%{NUMBER:kbcommit:float}\s+%{NUMBER:kpercentcommit:float}\z" }
      remove_field => [ "message" ]
      add_field => {"series_name" => "%{host}.%{type}"}
    }
    ruby {
      code => "event['kbtotalmemory'] = (event['kbmemfree'] + event['kbmemused']); event['kbnetoused'] = (event['kbmemused'] - (event['kbbuffers'] + event['kbcached'])); event['kbnetofree'] = (event['kbmemfree'] + (event['kbbuffers'] + event['kbcached']))"
    }
  }
  if [type] == "system.disks" {
    split {}
    grok {
      match => { "message" => "\A(?<sar_time>%{HOUR}:%{MINUTE}:%{SECOND})\s+%{DATA:disk}\s+%{NUMBER:tps:float}\s+%{NUMBER:rd_sec_s:float}\s+%{NUMBER:wr_sec_s:float}\s+%{NUMBER:avgrq-sz:float}\s+%{NUMBER:avgqu-sz:float}\s+%{NUMBER:await:float}\s+%{NUMBER:svctm:float}\s+%{NUMBER:percenutil:float}\z" }
      remove_field => [ "message" ]
      add_field => {"series_name" => "%{host}.%{type}.%{disk}"}
    }
  }
}
output {
}
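The output section above is left empty; with the new options it could look roughly like the following. The host, db, and fields_to_skip values are placeholders for illustration, not part of this PR.

```
output {
  influxdb {
    host => "localhost"                                      # placeholder
    db => "metrics"                                          # placeholder
    series => "%{series_name}"                               # series name built at the filter stage
    use_event_fields_for_data_points => true
    fields_to_skip => ["@version", "host", "type", "series_name"]  # illustrative skip list
  }
}
```

With this in place, every field the grok and ruby filters create (kbmemfree, tps, and so on) is written as a column automatically, and only the bookkeeping fields listed in fields_to_skip are dropped.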