diff --git a/docs/sources/operations/authentication.md b/docs/sources/operations/authentication.md index 44ef07c9bc391..22daf7dbf3a76 100644 --- a/docs/sources/operations/authentication.md +++ b/docs/sources/operations/authentication.md @@ -1,10 +1,10 @@ --- -title: Authentication -menuTitle: -description: Describes Loki authentication. +title: Manage authentication +menuTitle: Authentication +description: Describes how to add authentication to Grafana Loki. weight: --- -# Authentication +# Manage authentication Grafana Loki does not come with any included authentication layer. Operators are expected to run an authenticating reverse proxy in front of your services. diff --git a/docs/sources/operations/automatic-stream-sharding.md b/docs/sources/operations/automatic-stream-sharding.md index 2e876c89197f7..e2832650a3db0 100644 --- a/docs/sources/operations/automatic-stream-sharding.md +++ b/docs/sources/operations/automatic-stream-sharding.md @@ -1,14 +1,14 @@ --- -title: Automatic stream sharding +title: Manage large volume log streams with automatic stream sharding menuTitle: Automatic stream sharding description: Describes how to control issues around the per-stream rate limit using automatic stream sharding. weight: --- -# Automatic stream sharding +# Manage large volume log streams with automatic stream sharding -Automatic stream sharding will attempt to keep streams under a `desired_rate` by adding new labels and values to -existing streams. When properly tuned, this should eliminate issues where log producers are rate limited due to the +Automatic stream sharding can keep streams under a `desired_rate` by adding new labels and values to +existing streams. When properly tuned, this can eliminate issues where log producers are rate limited due to the per-stream rate limit. **To enable automatic stream sharding:** diff --git a/docs/sources/operations/autoscaling_queriers.md b/docs/sources/operations/autoscaling_queriers.md index 908a625016807..86deaa2caa05a 100644 --- a/docs/sources/operations/autoscaling_queriers.md +++ b/docs/sources/operations/autoscaling_queriers.md @@ -1,11 +1,10 @@ --- -title: Autoscaling Loki queriers +title: Manage varying workloads at scale with autoscaling queriers menuTitle: Autoscaling queriers description: Describes how to use KEDA to autoscale the quantity of queriers for a microservices mode Kubernetes deployment. weight: --- - -# Autoscaling Loki queriers +# Manage varying workloads at scale with autoscaling queriers A microservices deployment of a Loki cluster that runs on Kubernetes typically handles a workload that varies throughout the day. diff --git a/docs/sources/operations/blocking-queries.md b/docs/sources/operations/blocking-queries.md index ac286bbb3f849..1af8bec04beba 100644 --- a/docs/sources/operations/blocking-queries.md +++ b/docs/sources/operations/blocking-queries.md @@ -1,10 +1,10 @@ --- -title: Blocking Queries -menuTitle: -description: Describes how to configure Loki to block expensive queries using per-tenant overrides. +title: Block unwanted queries +menuTitle: Unwanted queries +description: Describes how to configure Grafana Loki to block unwanted or expensive queries using per-tenant overrides. weight: --- -# Blocking Queries +# Block unwanted queries In certain situations, you may not be able to control the queries being sent to your Loki installation. 
These queries may be intentionally or unintentionally expensive to run, and they may affect the overall stability or cost of running diff --git a/docs/sources/operations/bloom-filters.md b/docs/sources/operations/bloom-filters.md index e1a09cdebcb07..c1030f63fe1e3 100644 --- a/docs/sources/operations/bloom-filters.md +++ b/docs/sources/operations/bloom-filters.md @@ -1,5 +1,5 @@ --- -title: Bloom filters (Experimental) +title: Manage bloom filter building and querying (Experimental) menuTitle: Bloom filters description: Describes how to enable and configure query acceleration with bloom filters. weight: @@ -9,13 +9,12 @@ keywords: aliases: - ./query-acceleration-blooms --- - -# Bloom filters (Experimental) +# Manage bloom filter building and querying (Experimental) {{< admonition type="warning" >}} In Loki and Grafana Enterprise Logs (GEL), Query acceleration using blooms is an [experimental feature](/docs/release-life-cycle/). Engineering and on-call support is not available. No SLA is provided. Note that this feature is intended for users who are ingesting more than 75TB of logs a month, as it is designed to accelerate queries against large volumes of logs. -In Grafana Cloud, Query acceleration using Bloom filters is enabled as a [public preview](/docs/release-life-cycle/) for select large-scale customers that are ingesting more that 75TB of logs a month. Limited support and no SLA are provided. +In Grafana Cloud, Query acceleration using bloom filters is enabled as a [public preview](/docs/release-life-cycle/) for select large-scale customers that are ingesting more than 75TB of logs a month. Limited support and no SLA are provided. {{< /admonition >}} Loki leverages [bloom filters](https://en.wikipedia.org/wiki/Bloom_filter) to speed up queries by reducing the amount of data Loki needs to load from the store and iterate through. diff --git a/docs/sources/operations/caching.md b/docs/sources/operations/caching.md index 53a132db58352..e401eb020e4c8 100644 --- a/docs/sources/operations/caching.md +++ b/docs/sources/operations/caching.md @@ -1,14 +1,13 @@ --- -title: Caching +title: Configure caches to speed up queries menuTitle: Caching -description: Describes how to enable and configure memcached to speed query performance. +description: Describes how to enable and configure memcached to improve query performance. weight: keywords: - memcached - caching --- - -# Caching +# Configure caches to speed up queries Loki supports caching of index writes and lookups, chunks and query results to speed up query performance. This section describes the recommended Memcached diff --git a/docs/sources/operations/loki-canary/_index.md b/docs/sources/operations/loki-canary/_index.md index f6c1bf23a9388..6fb18529357c6 100644 --- a/docs/sources/operations/loki-canary/_index.md +++ b/docs/sources/operations/loki-canary/_index.md @@ -1,10 +1,10 @@ --- -title: Loki Canary -menuTitle: +title: Audit data propagation latency and correctness using Loki Canary +menuTitle: Loki Canary description: Describes how to use Loki Canary to audit the log-capturing performance of a Grafana Loki cluster to ensure Loki is ingesting logs without data loss. weight: --- -# Loki Canary +# Audit data propagation latency and correctness using Loki Canary Loki Canary is a standalone app that audits the log-capturing performance of a Grafana Loki cluster. This component emits and periodically queries for logs, making sure that Loki is ingesting logs without any data loss. 
diff --git a/docs/sources/operations/meta-monitoring/_index.md b/docs/sources/operations/meta-monitoring/_index.md index 7b90955ef2ad4..12906a926b15e 100644 --- a/docs/sources/operations/meta-monitoring/_index.md +++ b/docs/sources/operations/meta-monitoring/_index.md @@ -1,11 +1,11 @@ --- -title: Monitor Loki +title: Collect metrics and logs from your Loki cluster +menuTitle: Monitor Loki description: Describes the various options for monitoring your Loki environment, and the metrics available. aliases: - ../operations/observability --- - -# Monitor Loki +# Collect metrics and logs from your Loki cluster As part of your Loki implementation, you will also want to monitor your Loki cluster. diff --git a/docs/sources/operations/meta-monitoring/mixins.md b/docs/sources/operations/meta-monitoring/mixins.md index a4a819c4e3d28..d95cae5861497 100644 --- a/docs/sources/operations/meta-monitoring/mixins.md +++ b/docs/sources/operations/meta-monitoring/mixins.md @@ -1,11 +1,10 @@ --- -title: Install Loki mixins -menuTitle: Install mixins -description: Describes the Loki mixins, how to configure and install the dashboards, alerts, and recording rules. +title: Install dashboards, alerts, and recording rules +menuTitle: Mixins +description: Describes the Loki mixins, how to configure and install the dashboards, alerts, and recording rules. weight: 100 --- - -# Install Loki mixins +# Install dashboards, alerts, and recording rules Loki is instrumented to expose metrics about itself via the `/metrics` endpoint, designed to be scraped by Prometheus. Each Loki release includes a mixin. The Loki mixin provides a set of Grafana dashboards, Prometheus recording rules and alerts for monitoring Loki. diff --git a/docs/sources/operations/multi-tenancy.md b/docs/sources/operations/multi-tenancy.md index bcd61ded9237c..97594795446be 100644 --- a/docs/sources/operations/multi-tenancy.md +++ b/docs/sources/operations/multi-tenancy.md @@ -1,10 +1,10 @@ --- -title: Multi-tenancy -menuTitle: -description: Describes how Loki implements multi-tenancy to isolate tenant data and queries. +title: Manage tenant isolation +menuTitle: Multi-tenancy +description: Describes how Grafana Loki implements multi-tenancy to isolate tenant data and queries. weight: --- -# Multi-tenancy +# Manage tenant isolation Grafana Loki is a multi-tenant system; requests and data for tenant A are isolated from tenant B. Requests to the Loki API should include an HTTP header diff --git a/docs/sources/operations/overrides-exporter.md b/docs/sources/operations/overrides-exporter.md index ef645ca28efde..a467bff64a91e 100644 --- a/docs/sources/operations/overrides-exporter.md +++ b/docs/sources/operations/overrides-exporter.md @@ -1,11 +1,11 @@ --- -title: Overrides exporter -menuTitle: -description: Describes how the Overrides Exporter module exposes tenant limits as Prometheus metrics. +title: Monitor tenant limits using the Overrides Exporter +menuTitle: Overrides Exporter +description: Describes how the Overrides Exporter exposes tenant limits as Prometheus metrics. weight: --- -# Overrides exporter +# Monitor tenant limits using the Overrides Exporter Loki is a multi-tenant system that supports applying limits to each tenant as a mechanism for resource management. The `overrides-exporter` module exposes these limits as Prometheus metrics in order to help operators better understand tenant behavior. 
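To make the collection side concrete, a minimal sketch of a Prometheus scrape job for the metrics exposed by the `overrides-exporter` might look like the following. The job name and target address are illustrative assumptions only; point the target at wherever the module runs in your deployment.

```yaml
scrape_configs:
  - job_name: loki-overrides-exporter          # illustrative job name
    metrics_path: /metrics                     # the module exposes its limit metrics over HTTP
    static_configs:
      - targets:
          - overrides-exporter.loki.svc:3100   # hypothetical address of the overrides-exporter
```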
diff --git a/docs/sources/operations/query-fairness/_index.md b/docs/sources/operations/query-fairness/_index.md index 79c569d5de723..655802629b82c 100644 --- a/docs/sources/operations/query-fairness/_index.md +++ b/docs/sources/operations/query-fairness/_index.md @@ -1,11 +1,10 @@ --- -title: Query fairness within tenants +title: Ensure query fairness within tenants using actors menuTitle: Query fairness description: Describes methods for guaranteeing query fairness across multiple actors within a single tenant using the scheduler. weight: --- - -# Query fairness within tenants +# Ensure query fairness within tenants using actors Loki uses [shuffle sharding]({{< relref "../shuffle-sharding/_index.md" >}}) to minimize impact across tenants in case of querier failures or misbehaving diff --git a/docs/sources/operations/recording-rules.md b/docs/sources/operations/recording-rules.md index 8c335740d5af6..b6c18ee1e09ca 100644 --- a/docs/sources/operations/recording-rules.md +++ b/docs/sources/operations/recording-rules.md @@ -1,11 +1,12 @@ --- -title: Recording Rules -menuTitle: -description: Working with recording rules. +title: Manage recording rules +menuTitle: Recording rules +description: Describes how to set up and use recording rules in Grafana Loki. weight: --- +# Manage recording rules -# Recording Rules +Recording rules are queries that run at regular intervals and produce metrics from logs that can be pushed to a Prometheus-compatible backend. Recording rules are evaluated by the `ruler` component. Each `ruler` acts as its own `querier`, in the sense that it executes queries against the store without using the `query-frontend` or `querier` components. It will respect all query diff --git a/docs/sources/operations/request-validation-rate-limits.md b/docs/sources/operations/request-validation-rate-limits.md index cb602c17c2292..6f631a25c5f57 100644 --- a/docs/sources/operations/request-validation-rate-limits.md +++ b/docs/sources/operations/request-validation-rate-limits.md @@ -1,17 +1,16 @@ --- -title: Request Validation and Rate-Limit Errors -menuTitle: -description: Request Validation and Rate-Limit Errors +title: Enforce rate limits and push request validation +menuTitle: Rate limits +description: Describes the rate limits and push request validation that Loki enforces, and how the resulting errors are handled. weight: --- +# Enforce rate limits and push request validation -# Request Validation and Rate-Limit Errors - -Loki will reject requests if they exceed a usage threshold (rate-limit error) or if they are invalid (validation error). +Loki will reject requests if they exceed a usage threshold (rate limit error) or if they are invalid (validation error). All occurrences of these errors can be observed using the `loki_discarded_samples_total` and `loki_discarded_bytes_total` metrics. The sections below describe the various possible reasons specified in the `reason` label of these metrics. -It is recommended that Loki operators set up alerts or dashboards with these metrics to detect when rate-limits or validation errors occur. +It is recommended that Loki operators set up alerts or dashboards with these metrics to detect when rate limits or validation errors occur. ### Terminology @@ -26,7 +25,7 @@ Rate-limits are enforced when Loki cannot handle more requests from a tenant. ### `rate_limited` -This rate-limit is enforced when a tenant has exceeded their configured log ingestion rate-limit. +This rate limit is enforced when a tenant has exceeded their configured log ingestion rate limit. 
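As a rough orientation for the options discussed below, a minimal sketch of raising the per-tenant ingestion limits in the `limits_config` block could look like this; the values are illustrative only, not recommendations:

```yaml
limits_config:
  ingestion_rate_mb: 10        # illustrative value; per-tenant ingestion rate limit (MB per second)
  ingestion_burst_size_mb: 20  # illustrative value; short-term burst allowance above the rate
```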
One solution if you're seeing samples dropped due to `rate_limited` is simply to increase the rate limits on your Loki cluster. These limits can be modified globally in the [`limits_config`](/docs/loki//configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki//configuration/#runtime-configuration-file) file. The config options to use are `ingestion_rate_mb` and `ingestion_burst_size_mb`. @@ -46,9 +45,9 @@ Note that you'll want to make sure your Loki cluster has sufficient resources pr ### `per_stream_rate_limit` -This limit is enforced when a single stream reaches its rate-limit. +This limit is enforced when a single stream reaches its rate limit. -Each stream has a rate-limit applied to it to prevent individual streams from overwhelming the set of ingesters it is distributed to (the size of that set is equal to the `replication_factor` value). +Each stream has a rate limit applied to it to prevent individual streams from overwhelming the set of ingesters it is distributed to (the size of that set is equal to the `replication_factor` value). This value can be modified globally in the [`limits_config`](/docs/loki//configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki//configuration/#runtime-configuration-file) file. The config options to adjust are `per_stream_rate_limit` and `per_stream_rate_limit_burst`. diff --git a/docs/sources/operations/scalability.md b/docs/sources/operations/scalability.md index 1cc6e87d12640..6916b60cd12f7 100644 --- a/docs/sources/operations/scalability.md +++ b/docs/sources/operations/scalability.md @@ -1,13 +1,13 @@ --- -title: Scale Loki -menuTitle: Scale -description: Describes how to scale Grafana Loki +title: Manage larger production deployments +menuTitle: Scale Loki +description: Describes strategies for scaling a Loki deployment when log volume increases. weight: --- -# Scale Loki +# Manage larger production deployments -When scaling Loki, operators should consider running several Loki processes -partitioned by role (ingester, distributor, querier) rather than a single Loki +When scaling Loki to handle increased log volume, operators should consider running several Loki processes +partitioned by role (ingester, distributor, querier, and so on) rather than a single Loki process. Grafana Labs' [production setup](https://github.com/grafana/loki/blob/main/production/ksonnet/loki) contains `.libsonnet` files that demonstrate configuring separate components and scaling for resource usage. diff --git a/docs/sources/operations/shuffle-sharding/_index.md b/docs/sources/operations/shuffle-sharding/_index.md index 166fd1992d583..aa658b69d5599 100644 --- a/docs/sources/operations/shuffle-sharding/_index.md +++ b/docs/sources/operations/shuffle-sharding/_index.md @@ -1,11 +1,10 @@ --- -title: Shuffle sharding +title: Isolate tenant workloads using shuffle sharding menuTitle: Shuffle sharding description: Describes how to isolate tenant workloads from other tenant workloads using shuffle sharding to provide a better sharing of resources. weight: --- - -# Shuffle sharding +# Isolate tenant workloads using shuffle sharding Shuffle sharding is a resource-management technique used to isolate tenant workloads from other tenant workloads, to give each tenant more of a single-tenant experience when running in a shared cluster. 
This technique is explained by AWS in their article [Workload isolation using shuffle-sharding](https://aws.amazon.com/builders-library/workload-isolation-using-shuffle-sharding/). diff --git a/docs/sources/operations/troubleshooting.md b/docs/sources/operations/troubleshooting.md index d99436e181a00..04d5be0712d82 100644 --- a/docs/sources/operations/troubleshooting.md +++ b/docs/sources/operations/troubleshooting.md @@ -1,12 +1,12 @@ --- -title: Troubleshooting Loki -menuTitle: Troubleshooting -description: Describes how to troubleshoot Grafana Loki. +title: Manage and debug errors +menuTitle: Troubleshooting +description: Describes how to troubleshoot and debug specific errors in Grafana Loki. weight: aliases: - /docs/loki/latest/getting-started/troubleshooting/ --- -# Troubleshooting Loki +# Manage and debug errors ## "Loki: Bad Gateway. 502" diff --git a/docs/sources/operations/upgrade.md b/docs/sources/operations/upgrade.md index 8b47232dff5bb..22c5cceeaf9f5 100644 --- a/docs/sources/operations/upgrade.md +++ b/docs/sources/operations/upgrade.md @@ -1,10 +1,10 @@ --- -title: Upgrade -description: Links to Loki upgrade documentation. +title: Manage version upgrades +menuTitle: Upgrade +description: Links to Grafana Loki upgrade documentation. weight: --- - -# Upgrade +# Manage version upgrades - [Upgrade](https://grafana.com/docs/loki//setup/upgrade/) from one Loki version to a newer version. diff --git a/docs/sources/operations/zone-ingesters.md b/docs/sources/operations/zone-ingesters.md index 7467f16ca09f3..51913da5e3b9e 100644 --- a/docs/sources/operations/zone-ingesters.md +++ b/docs/sources/operations/zone-ingesters.md @@ -1,11 +1,10 @@ --- -title: Zone aware ingesters -menuTitle: -description: Describes how to migrate from a single ingester StatefulSet to three zone aware ingester StatefulSets +title: Speed up ingester rollout using zone awareness +menuTitle: Zone aware ingesters +description: Describes how to migrate from a single ingester StatefulSet to three zone aware ingester StatefulSets. weight: --- - -# Zone aware ingesters +# Speed up ingester rollout using zone awareness The Loki zone aware ingesters are used by Grafana Labs in order to allow for easier rollouts of large Loki deployments. You can think of them as three logical zones, however with some extra Kubernetes configuration you could deploy them in separate zones. @@ -111,4 +110,4 @@ These instructions assume you are using the zone aware ingester jsonnet deployme 1. clean up any remaining temporary config from the migration, for example `multi_zone_ingester_migration_enabled: true` is no longer needed. -1. ensure that all the old default ingester PVC/PV are removed. \ No newline at end of file +1. ensure that all the old default ingester PVC/PV are removed.