Update Collector README to add details on the decouple processor (#1046)
* Add details on the decouple processor

* Update README.md

* Apply suggestions from code review

Co-authored-by: Nathan Slaughter <[email protected]>

---------

Co-authored-by: Nathan Slaughter <[email protected]>
adcharre and nslaughter authored Dec 11, 2023
1 parent db694d6 commit 4f44d54
Showing 1 changed file with 67 additions and 1 deletion.
68 changes: 67 additions & 1 deletion collector/README.md
@@ -88,4 +88,70 @@ from an S3 object using a CloudFormation template:
OPENTELEMETRY_COLLECTOR_CONFIG_FILE: s3://<bucket_name>.s3.<region>.amazonaws.com/collector_config.yaml
```
Loading configuration from S3 will require that the IAM role attached to your function includes read access to the relevant bucket.
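
As an illustration, here is a minimal sketch of granting that read access from a SAM/CloudFormation template. The
resource name, bucket name and object key below are placeholders, not values taken from this repository:

```yaml
Resources:
  CollectorFunction:
    Type: AWS::Serverless::Function
    Properties:
      # ... handler, runtime, layers, environment, etc. ...
      Policies:
        # Inline policy allowing the function's role to read the collector
        # configuration object. Replace <bucket_name> and the key with your own.
        - Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - s3:GetObject
              Resource: arn:aws:s3:::<bucket_name>/collector_config.yaml
```
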
# Improving Lambda response times
At the end of a Lambda function's execution, the OpenTelemetry client libraries flush any pending spans/metrics/logs
to the collector before returning control to the Lambda environment. The collector's pipelines are synchronous, which
means the function's response is delayed until the data has been exported.
This delay can amount to hundreds of milliseconds.
To overcome this problem, the [decouple](./processor/decoupleprocessor/README.md) processor can be used to separate the
two ends of the collector's pipeline, allowing the Lambda function to complete while ensuring that any data is exported
before the Lambda environment is frozen.
Below is a sample configuration that uses the decouple processor:
```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  logging:
    loglevel: debug
  otlp:
    endpoint: { backend endpoint }

processors:
  decouple:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [decouple]
      exporters: [logging, otlp]
```
## Reducing Lambda runtime
If your Lambda function is invoked frequently, it is also possible to pair the decouple processor with the batch
processor to reduce total Lambda execution time at the expense of delaying the export of OpenTelemetry data.
When used with the batch processor, the decouple processor must be the last processor in the pipeline to ensure that
data is successfully exported before the Lambda environment is frozen.
An example use of the batch and decouple processors:
```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  logging:
    loglevel: debug
  otlp:
    endpoint: { backend endpoint }

processors:
  decouple:
  batch:
    timeout: 5m

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [logging, otlp]
```
