Update titles of pages within the "Manage" section
Signed-off-by: Christian Haudum <[email protected]>
chaudum committed Jan 17, 2025
1 parent 905caa1 commit cc33f9a
Showing 19 changed files with 75 additions and 82 deletions.
8 changes: 4 additions & 4 deletions docs/sources/operations/authentication.md
@@ -1,10 +1,10 @@
 ---
-title: Authentication
-menuTitle:
-description: Describes Loki authentication.
+title: Manage authentication
+menuTitle: Authentication
+description: Describes how to add authentication to Grafana Loki.
 weight:
 ---
-# Authentication
+# Manage authentication
 
 Grafana Loki does not come with any included authentication layer. Operators are
 expected to run an authenticating reverse proxy in front of your services.
8 changes: 4 additions & 4 deletions docs/sources/operations/automatic-stream-sharding.md
@@ -1,14 +1,14 @@
 ---
-title: Automatic stream sharding
+title: Manage large volume log streams with automatic stream sharding
 menuTitle: Automatic stream sharding
 description: Describes how to control issues around the per-stream rate limit using automatic stream sharding.
 weight:
 ---
 
-# Automatic stream sharding
+# Manage large volume streams with automatic stream sharding
 
-Automatic stream sharding will attempt to keep streams under a `desired_rate` by adding new labels and values to
-existing streams. When properly tuned, this should eliminate issues where log producers are rate limited due to the
+Automatic stream sharding can keep streams under a `desired_rate` by adding new labels and values to
+existing streams. When properly tuned, this can eliminate issues where log producers are rate limited due to the
 per-stream rate limit.
 
 **To enable automatic stream sharding:**
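For context on the page retitled above, automatic stream sharding is driven by the `desired_rate` setting mentioned in the diff. A minimal sketch of enabling it, assuming a `shard_streams` block under `limits_config` and illustrative values; check the Loki configuration reference for your version:

```yaml
# Sketch only: enable automatic stream sharding so streams are split
# before they exceed the desired per-stream rate.
limits_config:
  shard_streams:
    enabled: true        # turn on automatic stream sharding
    desired_rate: 3MB    # target rate per sharded stream (illustrative value)
```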
5 changes: 2 additions & 3 deletions docs/sources/operations/autoscaling_queriers.md
@@ -1,11 +1,10 @@
 ---
-title: Autoscaling Loki queriers
+title: Manage varying workloads at scale with autoscaling queriers
 menuTitle: Autoscaling queriers
 description: Describes how to use KEDA to autoscale the quantity of queriers for a microsevices mode Kubernetes deployment.
 weight:
 ---
-
-# Autoscaling Loki queriers
+# Manage varying workloads at scale with autoscaling queriers
 
 A microservices deployment of a Loki cluster that runs on Kubernetes typically handles a
 workload that varies throughout the day.
8 changes: 4 additions & 4 deletions docs/sources/operations/blocking-queries.md
@@ -1,10 +1,10 @@
 ---
-title: Blocking Queries
-menuTitle:
-description: Describes how to configure Loki to block expensive queries using per-tenant overrides.
+title: Handle unwanted queries
+menuTitle: Unwanted queries
+description: Describes how to configure Grafana Loki to block unwanted or expensive queries using per-tenant overrides.
 weight:
 ---
-# Blocking Queries
+# Handle abusive queries
 
 In certain situations, you may not be able to control the queries being sent to your Loki installation. These queries
 may be intentionally or unintentionally expensive to run, and they may affect the overall stability or cost of running
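The page retitled above covers blocking queries through per-tenant overrides. A minimal sketch of a runtime overrides file, assuming the `blocked_queries` override documented for Loki; the tenant ID and pattern are illustrative:

```yaml
# Sketch only: block a specific expensive query for one tenant via runtime overrides.
overrides:
  "tenant-a":                           # illustrative tenant ID
    blocked_queries:
      - pattern: ".*expensive-regex.*"  # illustrative pattern
        regex: true                     # treat the pattern as a regular expression
```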
7 changes: 3 additions & 4 deletions docs/sources/operations/bloom-filters.md
@@ -1,5 +1,5 @@
 ---
-title: Bloom filters (Experimental)
+title: Manage bloom filter building and querying (Experimental)
 menuTitle: Bloom filters
 description: Describes how to enable and configure query acceleration with bloom filters.
 weight:
@@ -9,13 +9,12 @@ keywords:
 aliases:
 - ./query-acceleration-blooms
 ---
-
-# Bloom filters (Experimental)
+# Manage bloom filter building and querying (Experimental)
 
 {{< admonition type="warning" >}}
 In Loki and Grafana Enterprise Logs (GEL), Query acceleration using blooms is an [experimental feature](/docs/release-life-cycle/). Engineering and on-call support is not available. No SLA is provided. Note that this feature is intended for users who are ingesting more than 75TB of logs a month, as it is designed to accelerate queries against large volumes of logs.
 
-In Grafana Cloud, Query acceleration using Bloom filters is enabled as a [public preview](/docs/release-life-cycle/) for select large-scale customers that are ingesting more that 75TB of logs a month. Limited support and no SLA are provided.
+In Grafana Cloud, Query acceleration using bloom filters is enabled as a [public preview](/docs/release-life-cycle/) for select large-scale customers that are ingesting more that 75TB of logs a month. Limited support and no SLA are provided.
 {{< /admonition >}}
 
 Loki leverages [bloom filters](https://en.wikipedia.org/wiki/Bloom_filter) to speed up queries by reducing the amount of data Loki needs to load from the store and iterate through.
7 changes: 3 additions & 4 deletions docs/sources/operations/caching.md
@@ -1,14 +1,13 @@
 ---
-title: Caching
+title: Configure caches to speed up queries
 menuTitle: Caching
-description: Describes how to enable and configure memcached to speed query performance.
+description: Describes how to enable and configure memcached to improve query performance.
 weight:
 keywords:
 - memcached
 - caching
 ---
-
-# Caching
+# Configure caches to speed up queries
 
 Loki supports caching of index writes and lookups, chunks and query results to
 speed up query performance. This sections describes the recommended Memcached
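The caching page retitled above recommends Memcached for chunk and query result caches. A rough sketch of wiring the query results cache to Memcached, assuming the `query_range` results cache options and an illustrative Memcached address; verify the option names against the Loki configuration reference:

```yaml
# Sketch only: cache query results in Memcached to speed up repeated queries.
query_range:
  cache_results: true
  results_cache:
    cache:
      memcached_client:
        addresses: "dns+memcached.loki.svc.cluster.local:11211"  # illustrative address
        timeout: 500ms
```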
6 changes: 3 additions & 3 deletions docs/sources/operations/loki-canary/_index.md
@@ -1,10 +1,10 @@
 ---
-title: Loki Canary
-menuTitle:
+title: Audit data propagation latency and correctness using Loki Canary
+menuTitle: Loki Canary
 description: Describes how to use Loki Canary to audit the log-capturing performance of a Grafana Loki cluster to ensure Loki is ingesting logs without data loss.
 weight:
 ---
-# Loki Canary
+# Audit data propagation latency and correctness using Loki Canary
 
 Loki Canary is a standalone app that audits the log-capturing performance of a Grafana Loki cluster.
 This component emits and periodically queries for logs, making sure that Loki is ingesting logs without any data loss.
6 changes: 3 additions & 3 deletions docs/sources/operations/meta-monitoring/_index.md
@@ -1,11 +1,11 @@
 ---
-title: Monitor Loki
+title: Collect metrics and logs of your Loki cluster
 menuTitle: Monitor Loki
 description: Describes the various options for monitoring your Loki environment, and the metrics available.
 aliases:
 - ../operations/observability
 ---
-
-# Monitor Loki
+# Collect metrics and logs of your Loki cluster
+
 As part of your Loki implementation, you will also want to monitor your Loki cluster.
 
9 changes: 4 additions & 5 deletions docs/sources/operations/meta-monitoring/mixins.md
@@ -1,11 +1,10 @@
 ---
-title: Install Loki mixins
-menuTitle: Install mixins
-description: Describes the Loki mixins, how to configure and install the dashboards, alerts, and recording rules.
+title: Install dashboards, alerts, and recording rules
+menuTitle: Mixins
+description: Describes the Loki mixins, how to configure and install the dashboards, alerts, and recording rules.
 weight: 100
 ---
-
-# Install Loki mixins
+# Install dashboards, alerts, and recording rules
 
 Loki is instrumented to expose metrics about itself via the `/metrics` endpoint, designed to be scraped by Prometheus. Each Loki release includes a mixin. The Loki mixin provides a set of Grafana dashboards, Prometheus recording rules and alerts for monitoring Loki.
 
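The mixin page retitled above notes that Loki exposes Prometheus metrics on `/metrics`. A minimal Prometheus scrape configuration sketch, with an illustrative target address, of the kind those dashboards and alerts typically rely on:

```yaml
# Sketch only: scrape Loki's /metrics endpoint so the mixin dashboards have data.
scrape_configs:
  - job_name: loki
    static_configs:
      - targets: ["loki:3100"]  # illustrative Loki HTTP address
```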
8 changes: 4 additions & 4 deletions docs/sources/operations/multi-tenancy.md
@@ -1,10 +1,10 @@
 ---
-title: Multi-tenancy
-menuTitle:
-description: Describes how Loki implements multi-tenancy to isolate tenant data and queries.
+title: Manage tenant isolation
+menuTitle: Multi-tenancy
+description: Describes how Grafana Loki implements multi-tenancy to isolate tenant data and queries.
 weight:
 ---
-# Multi-tenancy
+# Manage tenant isolation
 
 Grafana Loki is a multi-tenant system; requests and data for tenant A are isolated from
 tenant B. Requests to the Loki API should include an HTTP header
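The multi-tenancy page retitled above describes tenant isolation driven by a tenant ID header on API requests. As a sketch, multi-tenant behavior hinges on the `auth_enabled` setting; when it is false, Loki runs in single-tenant mode:

```yaml
# Sketch only: with auth_enabled set to true, Loki expects a tenant ID header
# (X-Scope-OrgID) on API requests and keeps each tenant's data isolated.
auth_enabled: true
```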
8 changes: 4 additions & 4 deletions docs/sources/operations/overrides-exporter.md
@@ -1,11 +1,11 @@
 ---
-title: Overrides exporter
-menuTitle:
-description: Describes how the Overrides Exporter module exposes tenant limits as Prometheus metrics.
+title: Monitor tenant limits using the Overrides Exporter
+menuTitle: Overrides Exporter
+description: Describes how the Overrides Exporter exposes tenant limits as Prometheus metrics.
 weight:
 ---
 
-# Overrides exporter
+# Monitor tenant limits using the Overrides Exporter
 
 Loki is a multi-tenant system that supports applying limits to each tenant as a mechanism for resource management. The `overrides-exporter` module exposes these limits as Prometheus metrics in order to help operators better understand tenant behavior.
 
5 changes: 2 additions & 3 deletions docs/sources/operations/query-fairness/_index.md
@@ -1,11 +1,10 @@
 ---
-title: Query fairness within tenants
+title: Ensure query fairness within tenants using actors
 menuTitle: Query fairness
 description: Describes methods for guaranteeing query fairness across multiple actors within a single tenant using the scheduler.
 weight:
 ---
-
-# Query fairness within tenants
+# Ensure query fairness within tenants using actors
 
 Loki uses [shuffle sharding]({{< relref "../shuffle-sharding/_index.md" >}})
 to minimize impact across tenants in case of querier failures or misbehaving
9 changes: 5 additions & 4 deletions docs/sources/operations/recording-rules.md
@@ -1,11 +1,12 @@
 ---
-title: Recording Rules
-menuTitle:
-description: Working with recording rules.
+title: Manage recording rules
+menuTitle: Recording rules
+description: Describes how to setup and use recording rules in Grafana Loki.
 weight:
 ---
+# Manage recording rules
 
-# Recording Rules
+Recording rules are queries that run in an interval and produce metrics from logs that can be pushed to a Prometheus compatible backend.
 
 Recording rules are evaluated by the `ruler` component. Each `ruler` acts as its own `querier`, in the sense that it
 executes queries against the store without using the `query-frontend` or `querier` components. It will respect all query
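The recording rules page retitled above introduces rules that the `ruler` evaluates to produce metrics from logs. A small sketch of a rule file in the Prometheus-compatible format the ruler consumes, with an illustrative group name and LogQL expression:

```yaml
# Sketch only: a recording rule that turns an error-log rate into a metric.
groups:
  - name: example-rules            # illustrative group name
    rules:
      - record: job:error_logs:rate5m
        expr: 'sum by (job) (rate({job="myapp"} |= "error" [5m]))'  # illustrative LogQL
```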
19 changes: 9 additions & 10 deletions docs/sources/operations/request-validation-rate-limits.md
@@ -1,17 +1,16 @@
 ---
-title: Request Validation and Rate-Limit Errors
-menuTitle:
-description: Request Validation and Rate-Limit Errors
+title: Enforce rate limits and push request validation
+menuTitle: Rate limits
+description: Decribes the different rate limits and push request validation and their error handling.
 weight:
 ---
+# Enforce rate limits and push request validation
 
-# Request Validation and Rate-Limit Errors
-
-Loki will reject requests if they exceed a usage threshold (rate-limit error) or if they are invalid (validation error).
+Loki will reject requests if they exceed a usage threshold (rate limit error) or if they are invalid (validation error).
 
 All occurrences of these errors can be observed using the `loki_discarded_samples_total` and `loki_discarded_bytes_total` metrics. The sections below describe the various possible reasons specified in the `reason` label of these metrics.
 
-It is recommended that Loki operators set up alerts or dashboards with these metrics to detect when rate-limits or validation errors occur.
+It is recommended that Loki operators set up alerts or dashboards with these metrics to detect when rate limits or validation errors occur.
 
 
 ### Terminology
@@ -26,7 +25,7 @@ Rate-limits are enforced when Loki cannot handle more requests from a tenant.
 
 ### `rate_limited`
 
-This rate-limit is enforced when a tenant has exceeded their configured log ingestion rate-limit.
+This rate limit is enforced when a tenant has exceeded their configured log ingestion rate limit.
 
 One solution if you're seeing samples dropped due to `rate_limited` is simply to increase the rate limits on your Loki cluster. These limits can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. The config options to use are `ingestion_rate_mb` and `ingestion_burst_size_mb`.
 
@@ -46,9 +45,9 @@ Note that you'll want to make sure your Loki cluster has sufficient resources pr
 
 ### `per_stream_rate_limit`
 
-This limit is enforced when a single stream reaches its rate-limit.
+This limit is enforced when a single stream reaches its rate limit.
 
-Each stream has a rate-limit applied to it to prevent individual streams from overwhelming the set of ingesters it is distributed to (the size of that set is equal to the `replication_factor` value).
+Each stream has a rate limit applied to it to prevent individual streams from overwhelming the set of ingesters it is distributed to (the size of that set is equal to the `replication_factor` value).
 
 This value can be modified globally in the [`limits_config`](/docs/loki/<LOKI_VERSION>/configuration/#limits_config) block, or on a per-tenant basis in the [runtime overrides](/docs/loki/<LOKI_VERSION>/configuration/#runtime-configuration-file) file. The config options to adjust are `per_stream_rate_limit` and `per_stream_rate_limit_burst`.
 
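The retitled page above names the relevant `limits_config` options directly (`ingestion_rate_mb`, `ingestion_burst_size_mb`, `per_stream_rate_limit`, and `per_stream_rate_limit_burst`). A sketch of raising them globally, with illustrative values only:

```yaml
# Sketch only: raise tenant-wide and per-stream ingestion limits.
limits_config:
  ingestion_rate_mb: 10              # tenant-wide ingestion rate limit (illustrative)
  ingestion_burst_size_mb: 20        # tenant-wide burst size (illustrative)
  per_stream_rate_limit: 5MB         # limit for a single stream (illustrative)
  per_stream_rate_limit_burst: 15MB
```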
12 changes: 6 additions & 6 deletions docs/sources/operations/scalability.md
@@ -1,13 +1,13 @@
 ---
-title: Scale Loki
-menuTitle: Scale
-description: Describes how to scale Grafana Loki
+title: Manage larger production deployments
+menuTitle: Scale Loki
+description: Describes strategies how to scale a Loki deployment when log volume increases.
 weight:
 ---
-# Scale Loki
+# Manage larger production deployments
 
-When scaling Loki, operators should consider running several Loki processes
-partitioned by role (ingester, distributor, querier) rather than a single Loki
+When needing to scale Loki due to increased log volume, operators should consider running several Loki processes
+partitioned by role (ingester, distributor, querier, ...) rather than a single Loki
 process. Grafana Labs' [production setup](https://github.com/grafana/loki/blob/main/production/ksonnet/loki)
 contains `.libsonnet` files that demonstrates configuring separate components
 and scaling for resource usage.
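The scaling page retitled above recommends running separate processes per role. As a sketch, each process can be started with a single role selected through the `target` option (or the equivalent `-target` flag); the role name shown is one example:

```yaml
# Sketch only: run this Loki process as a querier; repeat with other targets
# (for example ingester or distributor) for the remaining roles.
target: querier
```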
5 changes: 2 additions & 3 deletions docs/sources/operations/shuffle-sharding/_index.md
@@ -1,11 +1,10 @@
 ---
-title: Shuffle sharding
+title: Isolate tenant workflows using shuffle sharding
 menuTitle: Shuffle sharding
 description: Describes how to isolate tenant workloads from other tenant workloads using shuffle sharding to provide a better sharing of resources.
 weight:
 ---
-
-# Shuffle sharding
+# Isolate tenant workflows using shuffle sharding
 
 Shuffle sharding is a resource-management technique used to isolate tenant workloads from other tenant workloads, to give each tenant more of a single-tenant experience when running in a shared cluster.
 This technique is explained by AWS in their article [Workload isolation using shuffle-sharding](https://aws.amazon.com/builders-library/workload-isolation-using-shuffle-sharding/).
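The shuffle sharding page retitled above describes isolating tenant workloads by giving each tenant a subset of queriers. A sketch of the kind of per-tenant limit involved, assuming the `max_queriers_per_tenant` option; verify the exact name in the Loki configuration reference:

```yaml
# Sketch only: restrict each tenant to a subset of queriers (shuffle sharding).
limits_config:
  max_queriers_per_tenant: 10   # illustrative shard size; 0 typically means no sharding
```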
8 changes: 4 additions & 4 deletions docs/sources/operations/troubleshooting.md
@@ -1,12 +1,12 @@
 ---
-title: Troubleshooting Loki
-menuTitle: Troubleshooting
-description: Describes how to troubleshoot Grafana Loki.
+title: Manage and debug errors
+menuTitle: Troubleshooting
+description: Describes how to troubleshoot and debug specific errors in Grafana Loki.
 weight:
 aliases:
 - /docs/loki/latest/getting-started/troubleshooting/
 ---
-# Troubleshooting Loki
+# Manage and debug errors
 
 ## "Loki: Bad Gateway. 502"
 
8 changes: 4 additions & 4 deletions docs/sources/operations/upgrade.md
@@ -1,10 +1,10 @@
 ---
-title: Upgrade
-description: Links to Loki upgrade documentation.
+title: Manage version upgrades
+menuTitle: Upgrade
+description: Links to Grafana Loki upgrade documentation.
 weight:
 ---
-
-# Upgrade
+# Manage version upgrades
 
 - [Upgrade](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) from one Loki version to a newer version.
 
11 changes: 5 additions & 6 deletions docs/sources/operations/zone-ingesters.md
@@ -1,11 +1,10 @@
 ---
-title: Zone aware ingesters
-menuTitle:
-description: Describes how to migrate from a single ingester StatefulSet to three zone aware ingester StatefulSets
+title: Speed up ingester rollout using zone awareness
+menuTitle: Zone aware ingesters
+description: Describes how to migrate from a single ingester StatefulSet to three zone aware ingester StatefulSets.
 weight:
 ---
-
-# Zone aware ingesters
+# Speed up ingester rollout using zone awareness
 
 The Loki zone aware ingesters are used by Grafana Labs in order to allow for easier rollouts of large Loki deployments. You can think of them as three logical zones, however with some extra Kubernetes configuration you could deploy them in separate zones.
 
@@ -111,4 +110,4 @@ These instructions assume you are using the zone aware ingester jsonnet deployme
 1. clean up any remaining temporary config from the migration, for example `multi_zone_ingester_migration_enabled: true` is no longer needed.
-1. ensure that all the old default ingester PVC/PV are removed.
+1. ensure that all the old default ingester PVC/PV are removed.
 