
Log data format change between 2.1.0 and 2.2.0 #1310


Closed
thenodon opened this issue May 11, 2025 · 1 comment · Fixed by #1311

@thenodon

Description:
I'm currently evaluating Parseable and just upgraded from 2.1.0 to 2.2.0. After the upgrade the log format is different: all of my OTel attributes end up nested under a key called other_attributes. This change looks like it was introduced in commit #1298.

In 2.1.0 the log data looks like this in the Parseable UI:

application: "com.docker.backend"
availability_zone: "zone-1"
body: "May 10 13:31:50 homebase com.docker.backend[192162]: [11:31:50.442155604Z][vm.lifecycle-server.apiproxy] << GET /v1.46/containers/43cc5c7628cca59ccdbd36f07deb237f2bad085cf2837947a8ca749cd87f080d/json (1.495144ms)"
flags: 0
host: "homebase"
log.file.name: "syslog"
log_record_dropped_attributes_count: 0
message: "[11:31:50.442155604Z][vm.lifecycle-server.apiproxy] << GET /v1.46/containers/43cc5c7628cca59ccdbd36f07deb237f2bad085cf2837947a8ca749cd87f080d/json (1.495144ms)"
observed_time_unix_nano: "2025-05-10T11:31:50.584"
p_format: "otel-logs"
p_src_ip: "127.0.0.1"
p_user_agent: "OpenTelemetry Collector Contrib/0.125.0 (linux/amd64)"
pid: "192162"
program: "com.docker.backend"
resource_dropped_attributes_count: 0
schema_url: ""
scope_dropped_attributes_count: 0
scope_log_schema_url: ""
scope_name: ""
scope_version: ""
severity_number: 0
severity_text: "SEVERITY_NUMBER_UNSPECIFIED"
span_id: ""
time_unix_nano: "2025-05-10T11:31:50"
timestamp: "May 10 13:31:50"
timestamp_my_nano: "1746876710000000000"
trace_id: ""

But in 2.2.0 it looks like this:

body: May 10 13:33:34 homebase com.docker.backend[192162]: [11:33:34.340657800Z][main.apiproxy         ] << GET /v1.46/containers/43cc5c7628cca59ccdbd36f07deb237f2bad085cf2837947a8ca749cd87f080d/json (4.17477ms)
flags: 0
host: homebase
log_record_dropped_attributes_count: 0
observed_time_unix_nano: 2025-05-10T11:33:34.384
other_attributes: {"timestamp":"May 10 13:33:34","program":"com.docker.backend","pid":"192162","message":"[11:33:34.340657800Z][main.apiproxy         ] << GET /v1.46/containers/43cc5c7628cca59ccdbd36f07deb237f2bad085cf2837947a8ca749cd87f080d/json (4.17477ms)","log.file.name":"syslog","timestamp_my_nano":"1746876814000000000","application":"com.docker.backend","availability_zone":"zone-1"}
p_format: otel-logs
p_src_ip: 127.0.0.1
p_user_agent: OpenTelemetry Collector Contrib/0.125.0 (linux/amd64)
resource_dropped_attributes_count: 0
schema_url: 
scope_dropped_attributes_count: 0
scope_log_schema_url: 
scope_name: 
scope_version: 
severity_number: 0
severity_text: SEVERITY_NUMBER_UNSPECIFIED
span_id: 
time_unix_nano: 2025-05-10T11:33:34
trace_id: 

So all the attributes I created in the OTel collector, like program, are now just part of the other_attributes structure.
I'm not sure I understand the logic behind this change, or how querying on those attributes with SQL is possible in a simple way.
This change broke all the dashboards, alerts, etc. that were based on running SQL against Parseable. With SQL as the query language, all OTel attributes should be directly queryable: select on them, aggregate, group by, and so on. I think this is key to being an OTel-native tool.
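To illustrate the breakage: a sketch of a query that worked directly against a column in 2.1.0 versus what the nested layout in 2.2.0 would force (the stream name syslog and the JSON accessor function are assumptions for illustration, not confirmed Parseable API):

```sql
-- 2.1.0: "program" is a top-level column, so aggregation is direct
SELECT program, COUNT(*) AS events
FROM syslog
GROUP BY program;

-- 2.2.0: the same value is buried inside the other_attributes JSON blob,
-- so every query first needs JSON extraction (function name is hypothetical)
SELECT json_get_str(other_attributes, 'program') AS program,
       COUNT(*) AS events
FROM syslog
GROUP BY 1;
```

Every existing dashboard and alert query would need this kind of rewrite, which is why keeping attributes as individual columns matters.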

@nitisht
Member

nitisht commented May 11, 2025

Thanks for reporting @thenodon , we'll get back on this soon

nikhilsinhaparseable added a commit to nikhilsinhaparseable/parseable that referenced this issue May 12, 2025
keep all attributes as individual columns in the ingested event
expose env `P_OTEL_ATTRIBUTES_ALLOWED_LIMIT` to configure the allowed limit
for attributes count
if attributes count in flattened event > the allowed limit
log the error, and reject the event

Fixes: parseablehq#1310
nitisht added a commit that referenced this issue May 14, 2025
Also remove other_attributes from otel logs/traces/metrics.
We keep all attributes as individual columns in the ingested event.
If total column count in flattened event > the allowed limit
log the error, and reject the event

Fixes: #1310

Signed-off-by: Nikhil Sinha <[email protected]>
Co-authored-by: Nitish Tiwari <[email protected]>
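Per the commit messages above, the attribute-count cap is configurable via the P_OTEL_ATTRIBUTES_ALLOWED_LIMIT environment variable. A minimal sketch of setting it before starting the server (the value 200 and the launch command are assumptions, not documented defaults):

```shell
# Cap how many individual attribute columns a flattened OTel event may
# produce; events exceeding the limit are logged as errors and rejected
# (per the commit message). The value 200 is an example, not a default.
export P_OTEL_ATTRIBUTES_ALLOWED_LIMIT=200

# Then start Parseable as usual, e.g. (command is an assumption):
# ./parseable local-store
echo "P_OTEL_ATTRIBUTES_ALLOWED_LIMIT=$P_OTEL_ATTRIBUTES_ALLOWED_LIMIT"
```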