This integration uses the Bitmovin Analytics API to pull the metrics below and push them into New Relic.
Supported Metrics:
- max_concurrent_viewers
- avg_rebuffer_percentage
- cnt_play_attempts
- cnt_video_start_failures
- avg_video_startup_time_ms
- avg_video_bitrate_mbps
The Standalone environment runs the data pipelines as an independent service, either on premises or on cloud instances such as AWS EC2. It can run on Linux, macOS, Windows, and any OS supported by Go.
- Go 1.20 or later.
Open a terminal, change to `cmd/standalone`, and run:
$ go build
The standalone environment requires a YAML file for pipeline configuration. The required keys are:
- `interval`: Integer. Time in seconds between requests (should match the schedule/cron).
- `exporter`: `nrmetrics`.
- `bitmovin_api_key`: String. Bitmovin API key.
- `bitmovin_license_key`: String. Bitmovin license key.
- `bitmovin_tenant_org`: String. Bitmovin tenant org.
- `nr_account_id`: String.
- `nr_api_key`: String. API key for writing.
- `nr_endpoint`: String. New Relic endpoint region. Either `US` or `EU`. Optional; the default value is `US`.
Check `config/example_config.yaml` for a configuration example.
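For reference, a minimal configuration using the keys above might look like the following sketch (all values are placeholders; substitute your own Bitmovin and New Relic credentials):

```yaml
# Minimal standalone pipeline configuration (placeholder values)
interval: 60                        # seconds between requests; match your cron schedule
exporter: nrmetrics
bitmovin_api_key: "YOUR_BITMOVIN_API_KEY"
bitmovin_license_key: "YOUR_BITMOVIN_LICENSE_KEY"
bitmovin_tenant_org: "YOUR_TENANT_ORG_ID"
nr_account_id: "1234567"
nr_api_key: "YOUR_NEW_RELIC_INGEST_KEY"
nr_endpoint: US                     # optional; US (default) or EU
```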
Just run the following command from the build folder:
$ ./standalone path/to/config.yaml
To run the pipeline on system start, check your specific system init documentation.
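On a systemd-based Linux system, for example, a minimal unit file could look like the sketch below (the install paths are assumptions; adjust them to wherever you placed the binary and config file):

```ini
# /etc/systemd/system/bitmovin-pipeline.service (example paths, adjust as needed)
[Unit]
Description=Bitmovin Analytics to New Relic standalone pipeline
After=network-online.target

[Service]
ExecStart=/opt/bitmovin-pipeline/standalone /opt/bitmovin-pipeline/config.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable and start it with `systemctl enable --now bitmovin-pipeline`.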
The Lambda environment runs the data pipeline in AWS Lambda instances. It's located in the `lambda` folder and is divided into 3 binaries: `lambda/receiver`, `lambda/processor`, and `lambda/exporter`.
- An AWS account.
- AWS CLI tool.
- Go 1.20 or later.
- GNU Make.
- Create 3 lambdas for Receiver, Processor, and Exporter: runtime `Go 1.x`, architecture `x86_64`, and handler names `receiver`, `processor`, and `exporter`.
- Create an SQS queue named ProcessorToExporter, type Standard, condition OnSuccess.
- Open the Receiver lambda config -> permissions -> execution role. Add another permission -> create inline policy, and add the Lambda write permissions `InvokeAsync` and `InvokeFunction`.
- Edit the Receiver lambda config and add the Processor lambda as a destination, with async invocation.
- Open the Processor lambda config -> permissions -> execution role. Add another permission -> create inline policy, and add SQS write permissions.
- Edit the Processor lambda config and add the SQS queue ProcessorToExporter as a destination.
- Open the Exporter lambda config -> permissions -> execution role. Add another permission -> create inline policy, and add SQS read and write permissions.
- Edit the Exporter lambda config and add the SQS queue ProcessorToExporter as a trigger.
Note: when creating and configuring the SQS service and trigger, make sure to set the timing and batching options you will need. For example, a time interval of 5 minutes and batching of 50 events.
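As an illustration, the inline policy granting the Receiver's execution role the two Lambda write permissions could look like the following (the region, account ID, and function name in the ARN are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:InvokeAsync",
        "lambda:InvokeFunction"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:PROCESSOR"
    }
  ]
}
```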
Open a terminal, change to `lambda`, and run:
$ make recv=RECEIVER proc=PROCESSOR expt=EXPORTER
Where RECEIVER, PROCESSOR, and EXPORTER are the AWS Lambda functions you created in the previous step.
A Lambda pipeline requires some configuration keys to be set as environment variables. To set them, go to the AWS console, Lambda -> Functions, click your function, then Configuration -> Environment variables:
Environment Variables to be set on the Receiver function:
- `interval`: Integer. Time in seconds between requests (should match the schedule/cron).
- `exporter`: `nrmetrics`.
- `bitmovin_api_key`: String. Bitmovin API key.
- `bitmovin_license_key`: String. Bitmovin license key.
- `bitmovin_tenant_org`: String. Bitmovin tenant org.
Environment Variables to be set on the Exporter function:
- `nr_account_id`: String. Account ID. Only required for the `nrevents` and `nrapi` exporters.
- `nr_api_key`: String. API key for writing.
- `nr_endpoint`: String. New Relic endpoint region. Either `US` or `EU`. Optional; the default value is `US`.
Finally, to start running the pipeline you will need an EventBridge rule. Add a trigger for the Receiver lambda, select EventBridge as the source, create a new rule, and set the schedule expression `rate(1 minute)` (or the rate you desire).
Instead of running the pipeline with an EventBridge rule, you can just send async invocations to the Receiver lambda from the command line, using the following command:
$ aws lambda invoke-async --function-name RECEIVER --invoke-args INPUT.json
Where RECEIVER is the Receiver lambda name and INPUT.json is a file containing any JSON (the input event will be ignored by the receiver).
This will simulate a timer event and trigger the pipeline.