From c0e5b4ac1fe24e8a97df2b9edf294b2f08bdb776 Mon Sep 17 00:00:00 2001
From: nbaenam
Date: Mon, 22 Jan 2024 14:20:32 +0100
Subject: [PATCH] feat(Sonic): Added logs monitoring key popover

---
 .../new-relic-one-pricing-billing/data-ingest-billing.mdx | 2 +-
 .../new-relic-one-user-management/user-type.mdx | 2 +-
 .../docs/data-apis/understand-data/new-relic-data-types.mdx | 2 +-
 .../understand-use-data/kubernetes-cluster-explorer.mdx | 2 +-
 .../referenced-policies/security-guide.mdx | 2 +-
 .../new-relic-logs/logs-licenses.mdx | 2 +-
 .../new-relic-logs/logs-plugin-licenses.mdx | 2 +-
 .../logs/forward-logs/aws-firelens-plugin-log-forwarding.mdx | 2 +-
 .../logs/forward-logs/aws-lambda-sending-cloudwatch-logs.mdx | 2 +-
 .../docs/logs/forward-logs/aws-lambda-sending-logs-s3.mdx | 2 +-
 .../logs/forward-logs/aws-lambda-sending-security-logs-s3.mdx | 2 +-
 src/content/docs/logs/forward-logs/azure-log-forwarding.mdx | 2 +-
 .../logs/forward-logs/enable-log-management-new-relic.mdx | 2 +-
 .../logs/forward-logs/fluent-bit-plugin-log-forwarding.mdx | 2 +-
 .../docs/logs/forward-logs/fluentd-plugin-log-forwarding.mdx | 2 +-
 .../forward-your-logs-using-infrastructure-agent.mdx | 2 +-
 src/content/docs/logs/forward-logs/heroku-log-forwarding.mdx | 2 +-
 .../logs/forward-logs/kubernetes-plugin-log-forwarding.mdx | 2 +-
 .../docs/logs/forward-logs/logstash-plugin-log-forwarding.mdx | 2 +-
 .../docs/logs/forward-logs/mux-video-streaming-firehose.mdx | 4 ++--
 .../forward-logs/stream-logs-using-kinesis-data-firehose.mdx | 2 +-
 .../logs/forward-logs/vector-output-sink-log-forwarding.mdx | 2 +-
 .../docs/logs/get-started/get-started-log-management.mdx | 2 +-
 src/content/docs/logs/get-started/logging-best-practices.mdx | 2 +-
 .../new-relics-log-management-security-privacy.mdx | 2 +-
 src/content/docs/logs/log-api/introduction-log-api.mdx | 2 +-
 .../docs/logs/logs-context/get-started-logs-context.mdx | 2 +-
 src/content/docs/logs/ui-data/obfuscation-ui.mdx | 2 +-
 src/content/docs/logs/ui-data/timestamp-support.mdx | 2 +-
 .../opentelemetry/opentelemetry-introduction.mdx | 2 +-
 .../opentelemetry/opentelemetry-troubleshooting.mdx | 2 +-
 src/content/docs/new-relic-solutions/get-started/glossary.mdx | 2 +-
 .../docs/new-relic-solutions/get-started/intro-new-relic.mdx | 2 +-
 .../data-governance-optimize-ingest-guide.mdx | 2 +-
 .../diagnostics-beginner-guide.mdx | 2 +-
 .../update-serverless-monitoring-aws-lambda.mdx | 2 +-
 .../capitalization/product-capability-feature-usage.mdx | 2 +-
 .../voice-strategies-docs-sound-new-relic.mdx | 4 ++--
 .../get-started-managing-large-logs.mdx | 2 +-
 .../tutorial-manage-large-log-volume/organize-large-logs.mdx | 2 +-
 .../tutorial-optimize-telemetry/data-optimize-techniques.mdx | 2 +-
 41 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/src/content/docs/accounts/accounts-billing/new-relic-one-pricing-billing/data-ingest-billing.mdx b/src/content/docs/accounts/accounts-billing/new-relic-one-pricing-billing/data-ingest-billing.mdx
index 7c6496ade80..8d3390e70d8 100644
--- a/src/content/docs/accounts/accounts-billing/new-relic-one-pricing-billing/data-ingest-billing.mdx
+++ b/src/content/docs/accounts/accounts-billing/new-relic-one-pricing-billing/data-ingest-billing.mdx
@@ -201,7 +201,7 @@ Here's a table with comparisons of the two options. Prices and limits are monthl
 [Automatically](/docs/logs/get-started/new-relics-log-management-security-privacy/#log-obfuscation) mask known credit card and Social Security number patterns in logs.

- Create and track rules directly in the log management UI, and [mask or hash](/docs/logs/ui-data/obfuscation-ui/) sensitive log data.
+ Create and track rules directly in the <InlinePopover type="logs"/> UI, and [mask or hash](/docs/logs/ui-data/obfuscation-ui/) sensitive log data.
diff --git a/src/content/docs/accounts/accounts-billing/new-relic-one-user-management/user-type.mdx b/src/content/docs/accounts/accounts-billing/new-relic-one-user-management/user-type.mdx
index 97998a8972f..0b2a832ae19 100644
--- a/src/content/docs/accounts/accounts-billing/new-relic-one-user-management/user-type.mdx
+++ b/src/content/docs/accounts/accounts-billing/new-relic-one-user-management/user-type.mdx
@@ -742,7 +742,7 @@ Details about access to the options available on [our Instant Observability page
   id="logs-capabilities"
   title="Log management access"
 >
-Details on log management feature access by user type:
+Details on <InlinePopover type="logs"/> feature access by user type:

diff --git a/src/content/docs/data-apis/understand-data/new-relic-data-types.mdx b/src/content/docs/data-apis/understand-data/new-relic-data-types.mdx
index faf49f7c718..223f284a9ec 100644
--- a/src/content/docs/data-apis/understand-data/new-relic-data-types.mdx
+++ b/src/content/docs/data-apis/understand-data/new-relic-data-types.mdx
@@ -746,7 +746,7 @@ A log is a message about a system used to understand the activity of the system
 ### Logs at New Relic [#logs-new-relic]

-Our [log management](/docs/logs/get-started/get-started-log-management) capabilities give you a centralized platform that connects your log data with other New Relic-monitored data. For example, you can [see logs alongside your APM data](/docs/logs/logs-context/logs-in-context).
+Our [<InlinePopover type="logs"/>](/docs/logs/get-started/get-started-log-management) capabilities give you a centralized platform that connects your log data with other New Relic-monitored data. For example, you can [see logs alongside your APM data](/docs/logs/logs-context/logs-in-context).

 In New Relic, log data is reported with multiple [attributes](/docs/using-new-relic/welcome-new-relic/get-started/glossary#attribute) (key-value data) attached.
 To query your log data, you could use a NRQL query like:

diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/understand-use-data/kubernetes-cluster-explorer.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/understand-use-data/kubernetes-cluster-explorer.mdx
index cb81bdf2402..03b5818c4e2 100644
--- a/src/content/docs/kubernetes-pixie/kubernetes-integration/understand-use-data/kubernetes-cluster-explorer.mdx
+++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/understand-use-data/kubernetes-cluster-explorer.mdx
@@ -334,7 +334,7 @@ For each pod, depending on the [integrations and features you've enabled](/docs/
 * Active alerts (both warning and critical)
 * [Kubernetes events that happened in that pod](/docs/integrations/kubernetes-integration/kubernetes-events/install-kubernetes-events-integration)
 * APM data and traces (if you've [linked your APM data](/docs/integrations/kubernetes-integration/link-your-applications/link-your-applications-kubernetes))
-* A link to the pods' and containers' logs, collected using the [Kubernetes plugin for log management in New Relic](/docs/logs/enable-logs/enable-logs/kubernetes-plugin-logs)
+* A link to the pods' and containers' logs, collected using the [Kubernetes plugin for <InlinePopover type="logs"/> in New Relic](/docs/logs/enable-logs/enable-logs/kubernetes-plugin-logs)

 Cluster and [control plane](/docs/integrations/kubernetes-integration/installation/configure-control-plane-monitoring) statistics are always visible on the left side.
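The data-types hunk above ends on a context line that promises an NRQL query; the query itself falls outside the hunk. A representative example of querying the `Log` type (illustrative only — the attribute name `level` and the filter are assumptions, not text from the patch) might be:

```sql
-- Fetch recent error-level log lines; 'level' is a hypothetical attribute
SELECT * FROM Log WHERE level = 'ERROR' SINCE 1 hour ago
```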
diff --git a/src/content/docs/licenses/license-information/referenced-policies/security-guide.mdx b/src/content/docs/licenses/license-information/referenced-policies/security-guide.mdx
index b819e6ce7c9..354c219b907 100644
--- a/src/content/docs/licenses/license-information/referenced-policies/security-guide.mdx
+++ b/src/content/docs/licenses/license-information/referenced-policies/security-guide.mdx
@@ -35,7 +35,7 @@ New Relic provides its customers controls of their data as follows:
 * New Relic's customers can use any number of methods to send data to New Relic's APIs, such as (1) using New Relic's software, (2) using vendor-neutral software that is managed and maintained by a third-party (e.g., [OpenTelemetry instrumentation](https://docs.newrelic.com/docs/integrations/open-source-telemetry-integrations/opentelemetry/introduction-opentelemetry-new-relic/#benefits) provided by [opentelemetry.io](opentelemetry.io), or (3) from third-party systems that customers manage and/or control.
 * New Relic's customers can use New Relic's Services such as NerdGraph to filter out and drop data. See [Drop data using nerdgraph](https://docs.newrelic.com/docs/telemetry-data-platform/manage-data/drop-data-using-nerdgraph/).
 * New Relic's customers can adjust their data retention periods as appropriate for their needs. See [Adjust retention](https://docs.newrelic.com/docs/telemetry-data-platform/manage-data/manage-data-retention/#adjust-retention).
-* New Relic's log management capabilities obfuscate numbers that match known patterns, such as bank card and social security numbers as described in our [log management security documentation](https://docs.newrelic.com/docs/logs/log-management/get-started/new-relics-log-management-security-privacy/). Customers that meet certain requirements can obfuscate their data as described [here](https://docs.newrelic.com/docs/logs/ui-data/obfuscation-ui/).
+* New Relic's <InlinePopover type="logs"/> capabilities obfuscate numbers that match known patterns, such as bank card and social security numbers as described in our [log management security documentation](https://docs.newrelic.com/docs/logs/log-management/get-started/new-relics-log-management-security-privacy/). Customers that meet certain requirements can obfuscate their data as described [here](https://docs.newrelic.com/docs/logs/ui-data/obfuscation-ui/).
 * New Relic honors requests to delete personal data in accordance with applicable privacy laws. Please see [https://docs.newrelic.com/docs/security/security-privacy/data-privacy/data-privacy-new-relic/](https://docs.newrelic.com/docs/security/security-privacy/data-privacy/data-privacy-new-relic/).
 * Customers may use New Relic's APIs to query data, such as NerdGraph described [here](https://docs.newrelic.com/docs/apis/nerdgraph/examples/nerdgraph-nrql-tutorial/), and New Relic Services to export the data to other cloud providers. Customers that meet certain requirements can export their data as described [here](https://docs.newrelic.com/docs/apis/nerdgraph/examples/nerdgraph-streaming-export/) and [here](https://docs.newrelic.com/docs/apis/nerdgraph/examples/nerdgraph-historical-data-export/).
 * Customers can configure their log forwarder; see [this](https://docs.newrelic.com/docs/logs/enable-log-management-new-relic/enable-log-monitoring-new-relic/forward-your-logs-using-infrastructure-agent/) before sending infrastructure logs to New Relic.
diff --git a/src/content/docs/licenses/product-or-service-licenses/new-relic-logs/logs-licenses.mdx b/src/content/docs/licenses/product-or-service-licenses/new-relic-logs/logs-licenses.mdx
index 7985c1a782d..1aeeb315d66 100644
--- a/src/content/docs/licenses/product-or-service-licenses/new-relic-logs/logs-licenses.mdx
+++ b/src/content/docs/licenses/product-or-service-licenses/new-relic-logs/logs-licenses.mdx
@@ -9,7 +9,7 @@ redirects:
 freshnessValidatedDate: never
 ---

-We love open-source software, and use the following with our [log management capabilities in New Relic](/docs/logs/get-started/get-started-log-management/). Thank you, open-source community, for making these fine tools! Some of these are listed under multiple software licenses, and in that case we have listed the license we've chosen to use.
+We love open-source software, and use the following with our [<InlinePopover type="logs"/> capabilities in New Relic](/docs/logs/get-started/get-started-log-management/). Thank you, open-source community, for making these fine tools! Some of these are listed under multiple software licenses, and in that case we have listed the license we've chosen to use.

 For a list of the licenses used for the log forwarders used with New Relic, see [Logs plugin licenses](/docs/licenses/product-or-service-licenses/new-relic-logs/logs-plugin-licenses/).
diff --git a/src/content/docs/licenses/product-or-service-licenses/new-relic-logs/logs-plugin-licenses.mdx b/src/content/docs/licenses/product-or-service-licenses/new-relic-logs/logs-plugin-licenses.mdx
index b4ff175ee18..65ede4c620a 100644
--- a/src/content/docs/licenses/product-or-service-licenses/new-relic-logs/logs-plugin-licenses.mdx
+++ b/src/content/docs/licenses/product-or-service-licenses/new-relic-logs/logs-plugin-licenses.mdx
@@ -9,7 +9,7 @@ redirects:
 freshnessValidatedDate: never
 ---

-We love open-source software, and use the following in the log forwarder plugins that can be used with our [log management capabilities in New Relic](/docs/logs/get-started/get-started-log-management/). Thank you, open-source community, for making these fine tools! Some of these are listed under multiple software licenses, and in that case we have listed the license we've chosen to use.
+We love open-source software, and use the following in the log forwarder plugins that can be used with our [<InlinePopover type="logs"/> capabilities in New Relic](/docs/logs/get-started/get-started-log-management/). Thank you, open-source community, for making these fine tools! Some of these are listed under multiple software licenses, and in that case we have listed the license we've chosen to use.

 For a list of the licenses used for log management in New Relic, see [Logs licenses](/docs/licenses/product-or-service-licenses/new-relic-logs/logs-licenses/).
diff --git a/src/content/docs/logs/forward-logs/aws-firelens-plugin-log-forwarding.mdx b/src/content/docs/logs/forward-logs/aws-firelens-plugin-log-forwarding.mdx
index 7610204bc18..2f693b8435b 100644
--- a/src/content/docs/logs/forward-logs/aws-firelens-plugin-log-forwarding.mdx
+++ b/src/content/docs/logs/forward-logs/aws-firelens-plugin-log-forwarding.mdx
@@ -17,7 +17,7 @@ freshnessValidatedDate: never
 If your log data is already being monitored by [AWS FireLens](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_firelens.html), you can use our FireLens integration to forward and enrich your log data in New Relic. This integration is built on our Fluent Bit output plugin.

-Forwarding your FireLens logs to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding your FireLens logs to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.

 ## Basic process [#compatibility-requirements]

diff --git a/src/content/docs/logs/forward-logs/aws-lambda-sending-cloudwatch-logs.mdx b/src/content/docs/logs/forward-logs/aws-lambda-sending-cloudwatch-logs.mdx
index 50f61683a9a..d301ffe6ce4 100644
--- a/src/content/docs/logs/forward-logs/aws-lambda-sending-cloudwatch-logs.mdx
+++ b/src/content/docs/logs/forward-logs/aws-lambda-sending-cloudwatch-logs.mdx
@@ -18,7 +18,7 @@ freshnessValidatedDate: never
 You can send your [Amazon CloudWatch logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) to New Relic using our AWS Lambda function, `newrelic-log-ingestion`. This can be easily deployed from the AWS Serverless application repository.

-Forwarding your CloudWatch logs to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding your CloudWatch logs to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.

 ## Install and configure the CloudWatch logs Lambda function [#install-function]

diff --git a/src/content/docs/logs/forward-logs/aws-lambda-sending-logs-s3.mdx b/src/content/docs/logs/forward-logs/aws-lambda-sending-logs-s3.mdx
index 8ef4c23c1a8..882820d34ff 100644
--- a/src/content/docs/logs/forward-logs/aws-lambda-sending-logs-s3.mdx
+++ b/src/content/docs/logs/forward-logs/aws-lambda-sending-logs-s3.mdx
@@ -18,7 +18,7 @@ import serverlessAWSLambdaSelectRegion from 'images/serverless_screenshot-crop_A
 You can send your [Amazon S3 buckets](https://aws.amazon.com/s3/) to New Relic using our AWS Lambda function, `NewRelic-log-ingestion-s3`. This can be easily deployed from the AWS Serverless application repository.

-Forwarding logs from your S3 bucket to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding logs from your S3 bucket to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.
 ## Install the Lambda function [#install-function]

diff --git a/src/content/docs/logs/forward-logs/aws-lambda-sending-security-logs-s3.mdx b/src/content/docs/logs/forward-logs/aws-lambda-sending-security-logs-s3.mdx
index 3bf79b36966..a2bc222bd77 100644
--- a/src/content/docs/logs/forward-logs/aws-lambda-sending-security-logs-s3.mdx
+++ b/src/content/docs/logs/forward-logs/aws-lambda-sending-security-logs-s3.mdx
@@ -14,7 +14,7 @@ import serverlessAWSLambdaSelectRegion from 'images/serverless_screenshot-crop_A
 You can send your [Amazon Security Lake logs](https://docs.aws.amazon.com/security-lake/latest/userguide/internal-sources.html) to New Relic using Serverless app `newrelic-securitylake-s3-processor-LogForwarder`, which can be easily deployed from [AWS Serverless application repository](https://serverlessrepo.aws.amazon.com/applications).

-Forwarding logs from your Security Lake to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding logs from your Security Lake to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.

 ## Install the Serverless application [#install-app]

diff --git a/src/content/docs/logs/forward-logs/azure-log-forwarding.mdx b/src/content/docs/logs/forward-logs/azure-log-forwarding.mdx
index 7da9554b5c9..f255fe819c5 100644
--- a/src/content/docs/logs/forward-logs/azure-log-forwarding.mdx
+++ b/src/content/docs/logs/forward-logs/azure-log-forwarding.mdx
@@ -31,7 +31,7 @@ import enforceHttpsFunctionApp from 'images/enforce-https-function-app.webp'
 If your logs are already being collected in Azure, you can use our [Microsoft Azure Resource Manager (ARM)](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview) templates to forward and enrich them in New Relic.
-Forwarding your Azure logs to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding your Azure logs to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.

 We currently offer two ARM templates to achieve this: the EventHub-based (recommended) and the Blob Storage-based templates.

diff --git a/src/content/docs/logs/forward-logs/enable-log-management-new-relic.mdx b/src/content/docs/logs/forward-logs/enable-log-management-new-relic.mdx
index a521eef417f..444c3a098f7 100644
--- a/src/content/docs/logs/forward-logs/enable-log-management-new-relic.mdx
+++ b/src/content/docs/logs/forward-logs/enable-log-management-new-relic.mdx
@@ -28,7 +28,7 @@ import logsNRLogsinContext from 'images/logs_diagram_NR-logs-in-context.webp'
 import logsLogForwardOptions from 'images/logs_diagram_log-forward-options.webp'

-Our log management capabilities help you to collect, process, explore, query, and alert on your log data. To get your logs into New Relic, you can use any of these options:
+Our <InlinePopover type="logs"/> capabilities help you to collect, process, explore, query, and alert on your log data. To get your logs into New Relic, you can use any of these options:
capabilities to collect, process, explore, query, and alert on your log data.

 ## Basic process [#compatibility-requirements]

diff --git a/src/content/docs/logs/forward-logs/fluentd-plugin-log-forwarding.mdx b/src/content/docs/logs/forward-logs/fluentd-plugin-log-forwarding.mdx
index e5bd6d813b6..4b6f46d41fc 100644
--- a/src/content/docs/logs/forward-logs/fluentd-plugin-log-forwarding.mdx
+++ b/src/content/docs/logs/forward-logs/fluentd-plugin-log-forwarding.mdx
@@ -18,7 +18,7 @@ freshnessValidatedDate: never
 If your log data is already being monitored by [Fluentd](https://www.fluentd.org), you can use our Fluentd integration to forward and enrich your log data in New Relic.
-Forwarding your Fluentd logs to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding your Fluentd logs to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.

 ## Basic process [#enable-process]

diff --git a/src/content/docs/logs/forward-logs/forward-your-logs-using-infrastructure-agent.mdx b/src/content/docs/logs/forward-logs/forward-your-logs-using-infrastructure-agent.mdx
index 3bf7594fe7b..95e0e935c0c 100644
--- a/src/content/docs/logs/forward-logs/forward-your-logs-using-infrastructure-agent.mdx
+++ b/src/content/docs/logs/forward-logs/forward-your-logs-using-infrastructure-agent.mdx
@@ -464,7 +464,7 @@ Although these configuration parameters aren't required, we still recommend you
 The `attributes` configuration parameter does not add custom attributes to logs forwarded via external Fluent Bit configuration (for example, using the `fluentbit` configuration parameter). In this scenario, you should refer to the `record_modifier` option in the [Fluent Bit documentation](https://docs.fluentbit.io/manual/).

- One common use of the `attributes` configuration parameter is to specify the `logtype` attribute. This attribute allows leveraging one of the [built-in parsing rules](/docs/logs/log-management/ui-data/parsing/#built-in-rules) supported by New Relic's log management capabilities.
+ One common use of the `attributes` configuration parameter is to specify the `logtype` attribute. This attribute allows leveraging one of the [built-in parsing rules](/docs/logs/log-management/ui-data/parsing/#built-in-rules) supported by New Relic's <InlinePopover type="logs"/> capabilities.
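The infrastructure-agent hunk above mentions setting the `logtype` attribute through the `attributes` parameter but the hunk doesn't include the example itself. A sketch of what that configuration can look like in the agent's `logging.d` directory (the file name, log path, and `nginx` value are illustrative, not taken from the patch):

```yaml
# /etc/newrelic-infra/logging.d/nginx.yml -- illustrative file name
logs:
  - name: nginx-access
    file: /var/log/nginx/access.log   # hypothetical log file to tail
    attributes:
      logtype: nginx                  # lets New Relic apply a built-in parsing rule
```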
 **Example:**

diff --git a/src/content/docs/logs/forward-logs/heroku-log-forwarding.mdx b/src/content/docs/logs/forward-logs/heroku-log-forwarding.mdx
index f5f77b22037..56dc2066c2d 100644
--- a/src/content/docs/logs/forward-logs/heroku-log-forwarding.mdx
+++ b/src/content/docs/logs/forward-logs/heroku-log-forwarding.mdx
@@ -13,7 +13,7 @@ freshnessValidatedDate: never
 If your log data is already being monitored by Heroku's built-in [Logplex](https://devcenter.heroku.com/articles/logplex) router, you can use our integration to forward and enrich your log data in New Relic.

-Forwarding your Heroku logs to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding your Heroku logs to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.

 We currently support [Heroku HTTPS drains](https://devcenter.heroku.com/articles/log-drains#https-drains) and [Heroku Syslog drains](https://devcenter.heroku.com/articles/log-drains#syslog-drains).

diff --git a/src/content/docs/logs/forward-logs/kubernetes-plugin-log-forwarding.mdx b/src/content/docs/logs/forward-logs/kubernetes-plugin-log-forwarding.mdx
index 4a2a2d862b4..227b729a925 100644
--- a/src/content/docs/logs/forward-logs/kubernetes-plugin-log-forwarding.mdx
+++ b/src/content/docs/logs/forward-logs/kubernetes-plugin-log-forwarding.mdx
@@ -16,7 +16,7 @@ redirects:
 freshnessValidatedDate: never
 ---

-New Relic's Kubernetes plugin for log forwarding simplifies sending logs from your cluster to New Relic logs. It uses a standalone Docker image and runs as a DaemonSet, seamlessly collecting logs for centralized analysis and troubleshooting. Forwarding your Kubernetes logs to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+New Relic's Kubernetes plugin for log forwarding simplifies sending logs from your cluster to New Relic logs. It uses a standalone Docker image and runs as a DaemonSet, seamlessly collecting logs for centralized analysis and troubleshooting. Forwarding your Kubernetes logs to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.

 ## Enable Kubernetes for log management [#enable-process]

diff --git a/src/content/docs/logs/forward-logs/logstash-plugin-log-forwarding.mdx b/src/content/docs/logs/forward-logs/logstash-plugin-log-forwarding.mdx
index a8a0d4c5488..9eb65b4340c 100644
--- a/src/content/docs/logs/forward-logs/logstash-plugin-log-forwarding.mdx
+++ b/src/content/docs/logs/forward-logs/logstash-plugin-log-forwarding.mdx
@@ -17,7 +17,7 @@ freshnessValidatedDate: never
 If your log data is already being monitored by [Logstash](https://www.elastic.co/products/logstash), you can use our Logstash plugin to forward and enrich your log data in New Relic.

-Forwarding your Logstash logs to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding your Logstash logs to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.
 ## Enable Logstash for log management [#enable-process]

diff --git a/src/content/docs/logs/forward-logs/mux-video-streaming-firehose.mdx b/src/content/docs/logs/forward-logs/mux-video-streaming-firehose.mdx
index 0801692fe37..c717852791b 100644
--- a/src/content/docs/logs/forward-logs/mux-video-streaming-firehose.mdx
+++ b/src/content/docs/logs/forward-logs/mux-video-streaming-firehose.mdx
@@ -27,11 +27,11 @@ If everything is configured correctly and your data is being collected, you shou
 * Our [logs UI](https://one.newrelic.com/launcher/logger.log-launcher)
 * Our [NRQL query tools](/docs/chart-builder/use-chart-builder/choose-data/use-advanced-nrql-mode-specify-data). For example, you can execute a query like this:

-  ```
+  ```sql
   SELECT * FROM Log
   ```

-If no data appears after you enable our log management capabilities, follow our [standard log troubleshooting procedures](/docs/logs/log-management/troubleshooting/no-log-data-appears-ui/).
+If no data appears after you enable our <InlinePopover type="logs"/> capabilities, follow our [standard log troubleshooting procedures](/docs/logs/log-management/troubleshooting/no-log-data-appears-ui/).

 ## What's next? [#what-next]

diff --git a/src/content/docs/logs/forward-logs/stream-logs-using-kinesis-data-firehose.mdx b/src/content/docs/logs/forward-logs/stream-logs-using-kinesis-data-firehose.mdx
index 64664b47c4a..1d7630955c5 100644
--- a/src/content/docs/logs/forward-logs/stream-logs-using-kinesis-data-firehose.mdx
+++ b/src/content/docs/logs/forward-logs/stream-logs-using-kinesis-data-firehose.mdx
@@ -21,7 +21,7 @@ import logsAWSKinesisFirehoseBufferHints from 'images/logs_screenshot-crop_AWS-K
 If your log data is already being monitored by [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html), you can use our Kinesis Data Firehose integration to forward and enrich your log data in New Relic.
 Kinesis Data Firehose is a service that can stream data in real time to a variety of destinations, including our platform.

-Forwarding your CloudWatch Logs or other logs compatible with a Kinesis stream to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding your CloudWatch Logs or other logs compatible with a Kinesis stream to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.

 ## Create the delivery stream for New Relic [#create-delivery-stream]

diff --git a/src/content/docs/logs/forward-logs/vector-output-sink-log-forwarding.mdx b/src/content/docs/logs/forward-logs/vector-output-sink-log-forwarding.mdx
index 8572f49c9f9..509f088b278 100644
--- a/src/content/docs/logs/forward-logs/vector-output-sink-log-forwarding.mdx
+++ b/src/content/docs/logs/forward-logs/vector-output-sink-log-forwarding.mdx
@@ -14,7 +14,7 @@ freshnessValidatedDate: never
 If your log data is already being monitored by [Vector](https://vector.dev/), you can use our [Vector output sink](https://vector.dev/docs/reference/configuration/sinks/new_relic/) to forward and enrich your log data in New Relic.

-Forwarding your Vector logs to New Relic will give you enhanced log management capabilities to collect, process, explore, query, and alert on your log data.
+Forwarding your Vector logs to New Relic will give you enhanced <InlinePopover type="logs"/> capabilities to collect, process, explore, query, and alert on your log data.
 ## Configure the Vector logs sink for New Relic [#configure-sink]

diff --git a/src/content/docs/logs/get-started/get-started-log-management.mdx b/src/content/docs/logs/get-started/get-started-log-management.mdx
index 236b2e4044b..33776c0fff4 100644
--- a/src/content/docs/logs/get-started/get-started-log-management.mdx
+++ b/src/content/docs/logs/get-started/get-started-log-management.mdx
@@ -29,7 +29,7 @@ import logsMainLogsUi from 'images/logs_screenshot-full_main-logs-ui.webp'
 As applications move towards the cloud, microservices architecture is becoming more dispersed, making the ability to monitor logs essential. New Relic offers a fast, scalable log management platform so you can connect your logs with the rest of your telemetry and infrastructure data in a single place.

-Our log management solution provides deeper visibility into application and infrastructure performance data (events, errors, traces, and more) to reduce mean time to resolution (MTTR) and quickly troubleshoot production incidents.
+Our <InlinePopover type="logs"/> solution provides deeper visibility into application and infrastructure performance data (events, errors, traces, and more) to reduce mean time to resolution (MTTR) and quickly troubleshoot production incidents.

 ## Find problems faster, reduce context switching [#logs-definition]

diff --git a/src/content/docs/logs/get-started/logging-best-practices.mdx b/src/content/docs/logs/get-started/logging-best-practices.mdx
index 4329da4d9e8..81a690882da 100644
--- a/src/content/docs/logs/get-started/logging-best-practices.mdx
+++ b/src/content/docs/logs/get-started/logging-best-practices.mdx
@@ -148,4 +148,4 @@ For parsing logs we recommend that you:
 ## What's next?

-See [Get started with log management](/docs/logs/get-started/get-started-log-management/).
+See [Get started with <InlinePopover type="logs"/>](/docs/logs/get-started/get-started-log-management/).
diff --git a/src/content/docs/logs/get-started/new-relics-log-management-security-privacy.mdx b/src/content/docs/logs/get-started/new-relics-log-management-security-privacy.mdx
index 817b0979c8d..a6f25bae5a3 100644
--- a/src/content/docs/logs/get-started/new-relics-log-management-security-privacy.mdx
+++ b/src/content/docs/logs/get-started/new-relics-log-management-security-privacy.mdx
@@ -15,7 +15,7 @@ redirects:
 freshnessValidatedDate: never
 ---

-With our log management solution, you have direct control over what data is reported to New Relic. To ensure data privacy, and to limit the types of information New Relic receives, no customer data is captured except what you supply in API calls or log forwarder configuration. All data for the logs service is then reported to New Relic over HTTPS.
+With our <InlinePopover type="logs"/> solution, you have direct control over what data is reported to New Relic. To ensure data privacy, and to limit the types of information New Relic receives, no customer data is captured except what you supply in API calls or log forwarder configuration. All data for the logs service is then reported to New Relic over HTTPS.

 This document describes additional security considerations for your logging data. For more information about New Relic's security measures:

diff --git a/src/content/docs/logs/log-api/introduction-log-api.mdx b/src/content/docs/logs/log-api/introduction-log-api.mdx
index b4b487c5d60..3b79ffdd36b 100644
--- a/src/content/docs/logs/log-api/introduction-log-api.mdx
+++ b/src/content/docs/logs/log-api/introduction-log-api.mdx
@@ -66,7 +66,7 @@ To send log data to your New Relic account via the Log API:
 6. Generate some traffic and wait a few minutes, then [check your account](#what-next) for data.

-If no data appears after you enable our log management capabilities, follow our [troubleshooting procedures](/docs/logs/log-management/troubleshooting/no-log-data-appears-ui/).
+If no data appears after you enable our <InlinePopover type="logs"/> capabilities, follow our [troubleshooting procedures](/docs/logs/log-management/troubleshooting/no-log-data-appears-ui/).
 ## HTTP headers [#json-headers]
diff --git a/src/content/docs/logs/logs-context/get-started-logs-context.mdx b/src/content/docs/logs/logs-context/get-started-logs-context.mdx
index af3e5fb25e4..4bb329660e7 100644
--- a/src/content/docs/logs/logs-context/get-started-logs-context.mdx
+++ b/src/content/docs/logs/logs-context/get-started-logs-context.mdx
@@ -19,7 +19,7 @@ import apmLogsContextPatternsTwo from 'images/apm_screenshot-crop_logs-context-p
 import apmLogsCroppedUi from 'images/apm_screenshot-crop_logs-cropped-ui.webp'
-There are [several ways to report your logs to New Relic](/docs/logs/get-started/get-started-log-management). Using our APM agents is one popular way, especially for smaller teams and DevOps teams that value the benefit of not having to use any other log management tools.
+There are [several ways to report your logs to New Relic](/docs/logs/get-started/get-started-log-management). Using our APM agents is one popular way, especially for smaller teams and DevOps teams that value the benefit of not having to use any other <InlinePopover type="logs"/> tools.
 Got lots of logs? Check out our [tutorial on how to optimize and manage them](/docs/tutorial-large-logs/get-started-managing-large-logs/).
diff --git a/src/content/docs/logs/ui-data/obfuscation-ui.mdx b/src/content/docs/logs/ui-data/obfuscation-ui.mdx
index ba1703cd7c8..4669756f908 100644
--- a/src/content/docs/logs/ui-data/obfuscation-ui.mdx
+++ b/src/content/docs/logs/ui-data/obfuscation-ui.mdx
@@ -19,7 +19,7 @@ Our log obfuscation feature is available as part of our [Data Plus option](/docs
 ## What is log obfuscation? [#overview]
-Our [log management service](/docs/logs/get-started/get-started-log-management) automatically masks number patterns that we identify as likely being sensitive items, such as credit card or Social Security numbers.
+Our [<InlinePopover type="logs"/> service](/docs/logs/get-started/get-started-log-management) automatically masks number patterns that we identify as likely being sensitive items, such as credit card or Social Security numbers.
 If you need additional obfuscation, one option is to adjust the configuration of the log forwarder you use (for example, our infrastructure agent). But an easier option is to use our log obfuscation feature, available with [Data Plus](/docs/accounts/accounts-billing/new-relic-one-pricing-billing/data-ingest-billing#data-prices). This feature lets you set up log obfuscation rules directly from the log management UI, or via our NerdGraph API, without lengthy manual configuration. You'll define regular expressions matching your sensitive information, and then create rules to obfuscate that data. You can choose either to have sensitive information masked or hashed.
diff --git a/src/content/docs/logs/ui-data/timestamp-support.mdx b/src/content/docs/logs/ui-data/timestamp-support.mdx
index 976cbeff53f..8280031874c 100644
--- a/src/content/docs/logs/ui-data/timestamp-support.mdx
+++ b/src/content/docs/logs/ui-data/timestamp-support.mdx
@@ -122,7 +122,7 @@ Timestamps are converted to Unix epoch milliseconds and stored internally as a l
 * Inside the [simplified set of attributes](/docs/logs/log-api/introduction-log-api/#simple-json) of the JSON body message when sending a single JSON object.
 * Inside the [common](/docs/logs/log-api/introduction-log-api/#json-common) object in the detailed set of attributes of the JSON body message when sending one or more JSON objects. The timestamp applies to all log messages of this JSON.
 * Inside each log message in the [logs](/docs/logs/log-api/introduction-log-api/#json-logs) object in the detailed set of attributes of the JSON body message when sending one or more JSON objects. The timestamps only apply to that log message.
-* Inside the “message” JSON field when it is a valid JSON message. Our log management capabilities will parse any message attribute as JSON. The resulting JSON attributes in the parsed message will be added to the log.
+* Inside the “message” JSON field when it is a valid JSON message. Our <InlinePopover type="logs"/> capabilities will parse any message attribute as JSON. The resulting JSON attributes in the parsed message will be added to the log.
 Here are some examples of JSON logs with a valid `timestamp` attribute that override the ingest `timestamp`:
diff --git a/src/content/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-introduction.mdx b/src/content/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-introduction.mdx
index 8072b7181f9..d248ee54874 100644
--- a/src/content/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-introduction.mdx
+++ b/src/content/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-introduction.mdx
@@ -344,7 +344,7 @@ Here are the OpenTelemetry data types we support and their associated mappings.
 New Relic offers support for the OTLP ingest of log signals. The maturity of the upstream specification is [stable](https://github.com/open-telemetry/opentelemetry-specification/blob/87a5ed7f0d4c403e2b336f275ce3e7fd66a8041b/specification/versioning-and-stability.md#stable).
-OpenTelemetry logs are compatible with logs in New Relic. The OpenTelemetry logs optionally include attributes (name-value pairs) and resource attributes that map directly to dimensions you can use to facet or filter log data with queries. OpenTelemetry log metadata (for example, `name`, `severity_text`, and `trace_id`) also map directly to dimensions on New Relic's log management capabilities. We currently support all OpenTelemetry log message types.
+OpenTelemetry logs are compatible with logs in New Relic. The OpenTelemetry logs optionally include attributes (name-value pairs) and resource attributes that map directly to dimensions you can use to facet or filter log data with queries. OpenTelemetry log metadata (for example, `name`, `severity_text`, and `trace_id`) also map directly to dimensions on New Relic's <InlinePopover type="logs"/> capabilities. We currently support all OpenTelemetry log message types.
 For more details, see the [logs information in our best practices guide](/docs/integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-concepts#logs).
diff --git a/src/content/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-troubleshooting.mdx b/src/content/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-troubleshooting.mdx
index ffd8067c634..cb8ae4b45a6 100644
--- a/src/content/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-troubleshooting.mdx
+++ b/src/content/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-troubleshooting.mdx
@@ -82,7 +82,7 @@ var resource = Resource.getDefault()
 Depending on the SDK, you may also set the `service.name` by declaring it in the `OTEL_RESOURCE_ATTRIBUTES` or `OTEL_SERVICE_NAME` [environment variables](https://github.com/open-telemetry/opentelemetry-specification/blob/20c82de552d08428e8cadaaef3e6cb46812f7c00/specification/sdk-environment-variables.md#general-sdk-configuration).
-For log management, you can use a structured log template to inject the `service.name`. See [Logs in context with Log4j2](https://github.com/newrelic/newrelic-opentelemetry-examples/blob/e3f5ee85b4dcd8dd29f8f69d78d122b82a9638ba/other-examples/java/logs-in-context-log4j2/Log4j2EventLayout.json#L2) for an example.
+For <InlinePopover type="logs"/>, you can use a structured log template to inject the `service.name`. See [Logs in context with Log4j2](https://github.com/newrelic/newrelic-opentelemetry-examples/blob/e3f5ee85b4dcd8dd29f8f69d78d122b82a9638ba/other-examples/java/logs-in-context-log4j2/Log4j2EventLayout.json#L2) for an example.
 For more OpenTelemetry examples with New Relic, visit the [newrelic-opentelemetry-examples](https://github.com/newrelic/newrelic-opentelemetry-examples) repository on GitHub.
diff --git a/src/content/docs/new-relic-solutions/get-started/glossary.mdx b/src/content/docs/new-relic-solutions/get-started/glossary.mdx
index 0481761ea31..6c85cb4d4fb 100644
--- a/src/content/docs/new-relic-solutions/get-started/glossary.mdx
+++ b/src/content/docs/new-relic-solutions/get-started/glossary.mdx
@@ -704,7 +704,7 @@ To learn how New Relic uses events, see [New Relic data types](/docs/data-apis/u
 id="log"
 title="log"
 >
- A **log** is a message about a system used to understand the activity of the system and to diagnose problems. For more information on how we use log data, see [Log management](/docs/logs/new-relic-logs/get-started/introduction-new-relic-logs).
+ A **log** is a message about a system used to understand the activity of the system and to diagnose problems. For more information on how we use log data, see [<InlinePopover type="logs"/>](/docs/logs/new-relic-logs/get-started/introduction-new-relic-logs).
-We offer a fast, scalable log management platform so you can connect your logs with the rest of your telemetry and infrastructure data in a single place.
+We offer a fast, scalable <InlinePopover type="logs"/> platform so you can connect your logs with the rest of your telemetry and infrastructure data in a single place.
 * Enable log management with [APM logs in context](/docs/apm/new-relic-apm/getting-started/get-started-logs-context), our [infrastructure agent](/docs/logs/forward-logs/forward-your-logs-using-infrastructure-agent/), or other [log forwarding solutions](/docs/logs/forward-logs/enable-log-management-new-relic/).
 * Explore relevant [log data across your platform](/docs/apm/new-relic-apm/getting-started/get-started-logs-context#response-time-example), including errors, distributed traces, hosts, and more.
diff --git a/src/content/docs/new-relic-solutions/observability-maturity/operational-efficiency/data-governance-optimize-ingest-guide.mdx b/src/content/docs/new-relic-solutions/observability-maturity/operational-efficiency/data-governance-optimize-ingest-guide.mdx
index 0ac3bf65b67..211b85c00e5 100644
--- a/src/content/docs/new-relic-solutions/observability-maturity/operational-efficiency/data-governance-optimize-ingest-guide.mdx
+++ b/src/content/docs/new-relic-solutions/observability-maturity/operational-efficiency/data-governance-optimize-ingest-guide.mdx
@@ -1133,7 +1133,7 @@ This field works in a way similar to `grep -E` in Unix systems. For example, for
 If you have pre-written Fluentd configurations for Fluentbit that do valuable filtering or parsing, you can import them into our New Relic logging config. To do this, use the `config_file` and `parsers` parameters in any `.yaml` file in your `logging.d` folder:
-* `config_file`: path to an existing Fluent Bit configuration file. Any overlapping source results in duplicate messages in New Relic's log management.
+* `config_file`: path to an existing Fluent Bit configuration file. Any overlapping source results in duplicate messages in New Relic's <InlinePopover type="logs"/>.
 * `parsers_file`: path to an existing Fluent Bit parsers file. The following parser names are reserved: `rfc3164`, `rfc3164-local` and `rfc5424`.
diff --git a/src/content/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/diagnostics-beginner-guide.mdx b/src/content/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/diagnostics-beginner-guide.mdx
index 973dfa554cf..3f427a417a5 100644
--- a/src/content/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/diagnostics-beginner-guide.mdx
+++ b/src/content/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/diagnostics-beginner-guide.mdx
@@ -29,7 +29,7 @@ Here are some requirements and some recommendations for using this guide:
 * **Required** : [APM with distributed tracing](/docs/apm/apm-ui-pages/monitoring/apm-summary-page-view-transaction-apdex-usage-data), [APM logs in context](/docs/apm/new-relic-apm/getting-started/get-started-logs-context), and [infrastructure agent](/docs/infrastructure/infrastructure-monitoring/get-started/get-started-infrastructure-monitoring)
 * **Recommended**: [Logs](/docs/logs/get-started/get-started-log-management) and [network monitoring](/docs/network-performance-monitoring/get-started/npm-introduction) (NPM)
 * **Required**: [Service level management](/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/optimize-slm-guide)
-* **Recommended**: Some experience with using New Relic APM, distributed tracing, NRQL querying, and log management UI
+* **Recommended**: Some experience with using New Relic APM, distributed tracing, NRQL querying, and <InlinePopover type="logs"/> UI
 * **Recommended**: you've read these guides:
 * [Alert quality management](/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/alert-quality-management-guide)
 * [Service level management](/docs/new-relic-solutions/observability-maturity/uptime-performance-reliability/optimize-slm-guide)
diff --git a/src/content/docs/serverless-function-monitoring/aws-lambda-monitoring/enable-lambda-monitoring/update-serverless-monitoring-aws-lambda.mdx b/src/content/docs/serverless-function-monitoring/aws-lambda-monitoring/enable-lambda-monitoring/update-serverless-monitoring-aws-lambda.mdx
index f3656f9d919..634e5b2d8a6 100644
--- a/src/content/docs/serverless-function-monitoring/aws-lambda-monitoring/enable-lambda-monitoring/update-serverless-monitoring-aws-lambda.mdx
+++ b/src/content/docs/serverless-function-monitoring/aws-lambda-monitoring/enable-lambda-monitoring/update-serverless-monitoring-aws-lambda.mdx
@@ -83,7 +83,7 @@ If you [manually installed the ingest function from the AWS Serverless Applicati
 ## Enabling log management
-If you currently don't have New Relic's log management enabled, but would like to:
+If you currently don't have New Relic's <InlinePopover type="logs"/> enabled, but would like to:
 1. Make sure you have the latest version of the CLI:
diff --git a/src/content/docs/style-guide/capitalization/product-capability-feature-usage.mdx b/src/content/docs/style-guide/capitalization/product-capability-feature-usage.mdx
index 3989eb36a36..b046a5ad14a 100644
--- a/src/content/docs/style-guide/capitalization/product-capability-feature-usage.mdx
+++ b/src/content/docs/style-guide/capitalization/product-capability-feature-usage.mdx
@@ -463,7 +463,7 @@ Do not capitalize our capability and feature names (what you get with our platfo
 Feature and capability defined:
 * A feature is an individual experience or element of functionality in the New Relic platform or a New Relic capability.
-* A capability is a collection of features that enable a customer to achieve a use case. A capability is considered a superset of features and often tends to be an outside-in term that customers associate with an existing category such as application performance monitoring, applied intelligence, infrastructure monitoring, and log management. In other words, capabilities are the things we'd treat as SKUs if we sold them all separately.
+* A capability is a collection of features that enable a customer to achieve a use case. A capability is considered a superset of features and often tends to be an outside-in term that customers associate with an existing category such as application performance monitoring, applied intelligence, infrastructure monitoring, and <InlinePopover type="logs"/>. In other words, capabilities are the things we'd treat as SKUs if we sold them all separately.
 Notes about features and capabilities:
diff --git a/src/content/docs/style-guide/writing-strategies/voice-strategies-docs-sound-new-relic.mdx b/src/content/docs/style-guide/writing-strategies/voice-strategies-docs-sound-new-relic.mdx
index d0c87e151dd..083abe0d923 100644
--- a/src/content/docs/style-guide/writing-strategies/voice-strategies-docs-sound-new-relic.mdx
+++ b/src/content/docs/style-guide/writing-strategies/voice-strategies-docs-sound-new-relic.mdx
@@ -227,9 +227,9 @@ Consider reading aloud to hear how your content sounds. Is it natural? Are the s
diff --git a/src/content/docs/tutorial-manage-large-log-volume/get-started-managing-large-logs.mdx b/src/content/docs/tutorial-manage-large-log-volume/get-started-managing-large-logs.mdx
index 242f2074cfe..6d06e929e56 100644
--- a/src/content/docs/tutorial-manage-large-log-volume/get-started-managing-large-logs.mdx
+++ b/src/content/docs/tutorial-manage-large-log-volume/get-started-managing-large-logs.mdx
@@ -29,7 +29,7 @@ Whether you're setting up a log management platform for the first time or you're
 Once you've identified you have a problem with managing logs, it's time to choose a log management platform. There are many platforms out there. Some focus on quick automation but sacrifice ease-of-use. Others focus on complex features, but obscure their pricing.
-New Relic's philosphy when it comes to log management focuses on three things: we want our logs solution to be **flexible, transparent, and usage-based**. Let's quickly talk about what these mean:
+New Relic's philosophy when it comes to <InlinePopover type="logs"/> focuses on three things: we want our logs solution to be **flexible, transparent, and usage-based**. Let's quickly talk about what these mean:
 * **Flexible**: Everyone needs different things from their logs. Some may need to ingest a large amount for record keeping while some may need to ingest a small amount. Some may need to heavily parse their logs while other may barely parse their logs at all. Our log management platform gives you tools to manage what you send us.
 * **Transparent**: There are no surprises in billing. New Relic charges you only for the data you ingest at a fixed price per gigabyte.
diff --git a/src/content/docs/tutorial-manage-large-log-volume/organize-large-logs.mdx b/src/content/docs/tutorial-manage-large-log-volume/organize-large-logs.mdx
index 40962fa2bb3..4f594e107a7 100644
--- a/src/content/docs/tutorial-manage-large-log-volume/organize-large-logs.mdx
+++ b/src/content/docs/tutorial-manage-large-log-volume/organize-large-logs.mdx
@@ -187,7 +187,7 @@ For a more in-depth look at creating Grok patterns to parse logs, [read our blog
 ## What's next
-Congratulations on uncovering the true value of your logs and saving your team hours of frustration with your logs! As your system grows and you ingest, you'll need to ensure an upkeep of parsing rules and partitions. If you're interested in diving deeper on what New Relic log management can do for you, check out these docs:
+Congratulations on uncovering the true value of your logs and saving your team hours of frustration with your logs! As your system grows and you ingest more logs, you'll need to keep your parsing rules and partitions up to date. If you're interested in diving deeper on what New Relic <InlinePopover type="logs"/> can do for you, check out these docs:
 * [Parsing log data](/docs/logs/ui-data/parsing): A deeper look into parsing logs with Grok and learn how to create, query, and manage your log parsing rules by using NerdGraph, our GraphQL API.
 * [Log patterns](/docs/logs/ui-data/find-unusual-logs-log-patterns/): Log patterns are the fastest way to discover value in log data without searching.
diff --git a/src/content/docs/tutorial-optimize-telemetry/data-optimize-techniques.mdx b/src/content/docs/tutorial-optimize-telemetry/data-optimize-techniques.mdx
index f28fb545e9f..a6768d302d1 100644
--- a/src/content/docs/tutorial-optimize-telemetry/data-optimize-techniques.mdx
+++ b/src/content/docs/tutorial-optimize-telemetry/data-optimize-techniques.mdx
@@ -906,7 +906,7 @@ This field works in a way similar to `grep -E` in Unix systems. For example, for
 If you have pre-written Fluentd configurations for Fluentbit that do valuable filtering or parsing, you can import them into our logging configuration. To do this, use the `config_file` and `parsers` parameters in any `.yaml` file in your `logging.d` folder:
-* `config_file`: path to an existing Fluent Bit configuration file. Any overlapping source results in duplicate messages in New Relic's log management.
+* `config_file`: path to an existing Fluent Bit configuration file. Any overlapping source results in duplicate messages in New Relic's <InlinePopover type="logs"/>.
 * `parsers_file`: path to an existing Fluent Bit parsers file. The following parser names are reserved: `rfc3164`, `rfc3164-local` and `rfc5424`.
- New Relic offers a fast, scalable log management platform that allows you to connect your logs with the rest of your telemetry and infrastructure data.
+ New Relic offers a fast, scalable <InlinePopover type="logs"/> platform that allows you to connect your logs with the rest of your telemetry and infrastructure data.
- New Relic’s Telemetry Data Platform provides the world’s most powerful managed, open and unified platform for collecting, exploring, and alerting on your metrics, events, logs, and traces.
+ New Relic's Telemetry Data Platform provides the world's most powerful managed, open and unified platform for collecting, exploring, and alerting on your metrics, events, logs, and traces.