From afbc9508f3f41823d4eccaf22e45bf5eea682ef2 Mon Sep 17 00:00:00 2001 From: Colleen McGinnis Date: Thu, 13 Feb 2025 09:01:59 -0600 Subject: [PATCH] replace api links (#434) --- .../cloud-on-k8s/common-problems.md | 2 +- troubleshoot/elasticsearch/add-repository.md | 2 +- troubleshoot/elasticsearch/add-tier.md | 4 +-- .../allow-all-cluster-allocation.md | 12 ++++---- .../allow-all-index-allocation.md | 8 +++--- .../elasticsearch/circuit-breaker-errors.md | 6 ++-- .../decrease-disk-usage-data-node.md | 2 +- .../diagnose-unassigned-shards.md | 16 +++++------ .../diagnosing-unknown-repositories.md | 2 +- troubleshoot/elasticsearch/diagnostic.md | 2 +- .../discovery-troubleshooting.md | 10 +++---- .../elasticsearch/elasticsearch-reference.md | 2 +- .../index-lifecycle-management-errors.md | 12 ++++---- .../remote-clusters.md | 14 +++++----- .../fix-master-node-out-of-disk.md | 2 +- .../fix-other-node-out-of-disk.md | 2 +- .../elasticsearch/fix-watermark-errors.md | 8 +++--- troubleshoot/elasticsearch/high-cpu-usage.md | 8 +++--- .../elasticsearch/high-jvm-memory-pressure.md | 6 ++-- troubleshoot/elasticsearch/hotspotting.md | 26 ++++++++--------- .../increase-cluster-shard-limit.md | 14 +++++----- .../elasticsearch/increase-shard-limit.md | 14 +++++----- .../elasticsearch/increase-tier-capacity.md | 12 ++++---- .../elasticsearch/mapping-explosion.md | 18 ++++++------ .../monitoring-troubleshooting.md | 2 +- .../red-yellow-cluster-status.md | 26 ++++++++--------- .../elasticsearch/rejected-requests.md | 8 +++--- .../repeated-snapshot-failures.md | 6 ++-- .../elasticsearch/restore-from-snapshot.md | 28 +++++++++---------- .../security/security-trb-settings.md | 2 +- troubleshoot/elasticsearch/start-ilm.md | 6 ++-- troubleshoot/elasticsearch/start-slm.md | 6 ++-- .../elasticsearch/task-queue-backlog.md | 16 +++++------ .../transform-troubleshooting.md | 4 +-- .../troubleshoot-migrate-to-tiers.md | 8 +++--- .../elasticsearch/troubleshooting-searches.md | 20 ++++++------- 
.../troubleshooting-shards-capacity-issues.md | 18 ++++++------ .../troubleshooting-unbalanced-cluster.md | 2 +- .../troubleshooting-unstable-cluster.md | 8 +++--- troubleshoot/kibana/access.md | 2 +- troubleshoot/kibana/error-server-not-ready.md | 2 +- troubleshoot/kibana/maps.md | 2 +- .../observability/apm/known-issues.md | 4 +-- .../troubleshoot-mapping-issues.md | 2 +- troubleshoot/security/detection-rules.md | 2 +- 45 files changed, 189 insertions(+), 189 deletions(-) diff --git a/troubleshoot/deployments/cloud-on-k8s/common-problems.md b/troubleshoot/deployments/cloud-on-k8s/common-problems.md index 64f01bfa1..660edc1bd 100644 --- a/troubleshoot/deployments/cloud-on-k8s/common-problems.md +++ b/troubleshoot/deployments/cloud-on-k8s/common-problems.md @@ -180,7 +180,7 @@ Possible causes include: elasticsearch.elasticsearch.k8s.elastic.co/elasticsearch-sample yellow 1 7.9.2 Ready 3m50s ``` - In this case, you have to [check](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-allocation-explain.html) and fix your shard allocations. The [cluster health](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html), [cat shards](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-shards.html), and [get Elasticsearch](../../../deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md#k8s-elasticsearch-monitor-cluster-health) APIs can assist in tracking the shard recover process. + In this case, you have to [check](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-allocation-explain) and fix your shard allocations. 
The [cluster health](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-health), [cat shards](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-shards), and [get Elasticsearch](../../../deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md#k8s-elasticsearch-monitor-cluster-health) APIs can assist in tracking the shard recovery process. * Scheduling issues diff --git a/troubleshoot/elasticsearch/add-repository.md b/troubleshoot/elasticsearch/add-repository.md index 695f9413b..6537ecba0 100644 --- a/troubleshoot/elasticsearch/add-repository.md +++ b/troubleshoot/elasticsearch/add-repository.md @@ -6,7 +6,7 @@ mapped_pages: # Troubleshoot broken repositories [add-repository] -There are several situations where the [Health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html) might report an issue regarding the integrity of snapshot repositories in the cluster. The following pages explain the recommended actions for diagnosing corrupted, unknown, and invalid repositories: +There are several situations where the [Health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report) might report an issue regarding the integrity of snapshot repositories in the cluster. The following pages explain the recommended actions for diagnosing corrupted, unknown, and invalid repositories: * [Diagnosing corrupted repositories](diagnosing-corrupted-repositories.md) * [Diagnosing unknown repositories](diagnosing-unknown-repositories.md) diff --git a/troubleshoot/elasticsearch/add-tier.md b/troubleshoot/elasticsearch/add-tier.md index e6ff9489c..0078fd041 100644 --- a/troubleshoot/elasticsearch/add-tier.md +++ b/troubleshoot/elasticsearch/add-tier.md @@ -35,7 +35,7 @@ In order to get the shards assigned we need enable a new tier in the deployment. :class: screenshot ::: -4. Determine which tier an index expects for assignment.
[Retrieve](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) the configured value for the `index.routing.allocation.include._tier_preference` setting: +4. Determine which tier an index expects for assignment. [Retrieve](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings @@ -64,7 +64,7 @@ In order to get the shards assigned we need enable a new tier in the deployment. ::::::{tab-item} Self-managed In order to get the shards assigned you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. -To determine which tier an index requires for assignment, use the [get index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +To determine which tier an index requires for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings diff --git a/troubleshoot/elasticsearch/allow-all-cluster-allocation.md b/troubleshoot/elasticsearch/allow-all-cluster-allocation.md index 9daa090ab..4f271726a 100644 --- a/troubleshoot/elasticsearch/allow-all-cluster-allocation.md +++ b/troubleshoot/elasticsearch/allow-all-cluster-allocation.md @@ -17,7 +17,7 @@ In order to (re)allow all data to be allocated follow these 
steps: ::::::{tab-item} Elasticsearch Service In order to get the shards assigned we’ll need to change the value of the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#cluster-routing-allocation-enable) that restricts the assignemnt of the shards to allow all shards to be allocated. -We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.enable` [cluster setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html) and changing the configured value to `all`. +We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.enable` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and changing the configured value to `all`. **Use {{kib}}** @@ -35,7 +35,7 @@ We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.e :class: screenshot ::: -4. Inspect the `cluster.routing.allocation.enable` [cluster setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html): +4. Inspect the `cluster.routing.allocation.enable` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings): ```console GET /_cluster/settings?flat_settings @@ -54,7 +54,7 @@ We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.e 1. Represents the current configured value that controls if data is partially or fully allowed to be allocated in the system. -5. [Change](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html) the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#cluster-routing-allocation-enable) value to allow all the data in the system to be fully allocated: +5. 
[Change](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#cluster-routing-allocation-enable) value to allow all the data in the system to be fully allocated: ```console PUT _cluster/settings @@ -71,9 +71,9 @@ We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.e ::::::{tab-item} Self-managed In order to get the shards assigned we’ll need to change the value of the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#cluster-routing-allocation-enable) that restricts the assignemnt of the shards to allow all shards to be allocated. -We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.enable` [cluster setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html) and changing the configured value to `all`. +We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.enable` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and changing the configured value to `all`. -1. Inspect the `cluster.routing.allocation.enable` [cluster setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html): +1. Inspect the `cluster.routing.allocation.enable` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings): ```console GET /_cluster/settings?flat_settings @@ -92,7 +92,7 @@ We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.e 1. Represents the current configured value that controls if data is partially or fully allowed to be allocated in the system. -2. 
[Change](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html) the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#cluster-routing-allocation-enable) value to allow all the data in the system to be fully allocated: +2. [Change](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#cluster-routing-allocation-enable) value to allow all the data in the system to be fully allocated: ```console PUT _cluster/settings diff --git a/troubleshoot/elasticsearch/allow-all-index-allocation.md b/troubleshoot/elasticsearch/allow-all-index-allocation.md index 5de4878ad..b6fcb2c93 100644 --- a/troubleshoot/elasticsearch/allow-all-index-allocation.md +++ b/troubleshoot/elasticsearch/allow-all-index-allocation.md @@ -36,7 +36,7 @@ In order to get the shards assigned we’ll need to change the value of the [con :class: screenshot ::: -4. Inspect the `index.routing.allocation.enable` [index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) for the index with unassigned shards: +4. Inspect the `index.routing.allocation.enable` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards: ```console GET /my-index-000001/_settings/index.routing.allocation.enable?flat_settings @@ -56,7 +56,7 @@ In order to get the shards assigned we’ll need to change the value of the [con 1. Represents the current configured value that controls if the index is allowed to be partially or totally allocated. -5. 
[Change](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-routing-allocation-enable-setting) value to allow the index to be fully allocated: +5. [Change](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-routing-allocation-enable-setting) value to allow the index to be fully allocated: ```console PUT /my-index-000001/_settings @@ -73,7 +73,7 @@ In order to get the shards assigned we’ll need to change the value of the [con ::::::{tab-item} Self-managed In order to get the shards assigned we’ll need to change the value of the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-routing-allocation-enable-setting) that restricts the assignemnt of the shards to `all`. -1. Inspect the `index.routing.allocation.enable` [index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) for the index with unassigned shards: +1. Inspect the `index.routing.allocation.enable` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards: ```console GET /my-index-000001/_settings/index.routing.allocation.enable?flat_settings @@ -93,7 +93,7 @@ In order to get the shards assigned we’ll need to change the value of the [con 1. Represents the current configured value that controls if the index is allowed to be partially or totally allocated. -2. 
[Change](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-routing-allocation-enable-setting) value to allow the index to be fully allocated: +2. [Change](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the [configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-routing-allocation-enable-setting) value to allow the index to be fully allocated: ```console PUT /my-index-000001/_settings diff --git a/troubleshoot/elasticsearch/circuit-breaker-errors.md b/troubleshoot/elasticsearch/circuit-breaker-errors.md index 6578993c8..41dcd7d3f 100644 --- a/troubleshoot/elasticsearch/circuit-breaker-errors.md +++ b/troubleshoot/elasticsearch/circuit-breaker-errors.md @@ -47,13 +47,13 @@ Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] D If you’ve enabled Stack Monitoring, you can view JVM memory usage in {{kib}}. In the main menu, click **Stack Monitoring**. On the Stack Monitoring **Overview*** page, click ***Nodes**. The **JVM Heap** column lists the current memory usage for each node. -You can also use the [cat nodes API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-nodes.html) to get the current `heap.percent` for each node. +You can also use the [cat nodes API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) to get the current `heap.percent` for each node. ```console GET _cat/nodes?v=true&h=name,node*,heap* ``` -To get the JVM memory usage for each circuit breaker, use the [node stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html). +To get the JVM memory usage for each circuit breaker, use the [node stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-stats). 
```console GET _nodes/stats/breaker @@ -72,7 +72,7 @@ For high-cardinality `text` fields, fielddata can use a large amount of JVM memo **Clear the fielddata cache** -If you’ve triggered the fielddata circuit breaker and can’t disable fielddata, use the [clear cache API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-clearcache.html) to clear the fielddata cache. This may disrupt any in-flight searches that use fielddata. +If you’ve triggered the fielddata circuit breaker and can’t disable fielddata, use the [clear cache API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-clear-cache) to clear the fielddata cache. This may disrupt any in-flight searches that use fielddata. ```console POST _cache/clear?fielddata=true diff --git a/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md b/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md index f08becb2b..0462a40ee 100644 --- a/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md +++ b/troubleshoot/elasticsearch/decrease-disk-usage-data-node.md @@ -109,7 +109,7 @@ In order to estimate how many replicas need to be removed, first you need to est green logs-000001 1 0 7.7gb 7.7gb ``` -5. In the list above we see that if we reduce the replicas to 1 of the indices `my_index` and `my_other_index` we will release the required disk space. It is not necessary to reduce the replicas of `search-products` and `logs-000001` does not have any replicas anyway. Reduce the replicas of one or more indices with the [index update settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html): +5. In the list above we see that if we reduce the replicas to 1 of the indices `my_index` and `my_other_index` we will release the required disk space. It is not necessary to reduce the replicas of `search-products` and `logs-000001` does not have any replicas anyway. 
Reduce the replicas of one or more indices with the [index update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings): ::::{warning} Reducing the replicas of an index can potentially reduce search throughput and data redundancy. diff --git a/troubleshoot/elasticsearch/diagnose-unassigned-shards.md b/troubleshoot/elasticsearch/diagnose-unassigned-shards.md index 34ad91e3a..61c86f0a4 100644 --- a/troubleshoot/elasticsearch/diagnose-unassigned-shards.md +++ b/troubleshoot/elasticsearch/diagnose-unassigned-shards.md @@ -33,7 +33,7 @@ In order to diagnose the unassigned shards, follow the next steps: :class: screenshot ::: -4. View the unassigned shards using the [cat shards API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-shards.html). +4. View the unassigned shards using the [cat shards API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-shards). ```console GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state @@ -58,7 +58,7 @@ In order to diagnose the unassigned shards, follow the next steps: The index in the example has a primary shard unassigned. -5. To understand why an unassigned shard is not being assigned and what action you must take to allow {{es}} to assign it, use the [cluster allocation explanation API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-allocation-explain.html). +5. To understand why an unassigned shard is not being assigned and what action you must take to allow {{es}} to assign it, use the [cluster allocation explanation API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-allocation-explain). ```console GET _cluster/allocation/explain @@ -117,7 +117,7 @@ In order to diagnose the unassigned shards, follow the next steps: 5. The decider which led to the `no` decision for the node. 6. 
An explanation as to why the decider returned a `no` decision, with a helpful hint pointing to the setting that led to the decision. -6. The explanation in our case indicates the index allocation configurations are not correct. To review your allocation settings, use the [get index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) and [cluster get settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html) APIs. +6. The explanation in our case indicates the index allocation configurations are not correct. To review your allocation settings, use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and [cluster get settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) APIs. ```console GET my-index-000001/_settings?flat_settings=true&include_defaults=true @@ -125,7 +125,7 @@ In order to diagnose the unassigned shards, follow the next steps: GET _cluster/settings?flat_settings=true&include_defaults=true ``` -7. Change the settings using the [update index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) and [cluster update settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html) APIs to the correct values in order to allow the index to be allocated. +7. Change the settings using the [update index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) and [cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) APIs to the correct values in order to allow the index to be allocated. 
For more guidance on fixing the most common causes for unassinged shards please follow [this guide](red-yellow-cluster-status.md#fix-red-yellow-cluster-status) or contact [Elastic Support](https://support.elastic.co). :::::: @@ -133,7 +133,7 @@ For more guidance on fixing the most common causes for unassinged shards please ::::::{tab-item} Self-managed In order to diagnose the unassigned shards follow the next steps: -1. View the unassigned shards using the [cat shards API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-shards.html). +1. View the unassigned shards using the [cat shards API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-shards). ```console GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state @@ -158,7 +158,7 @@ In order to diagnose the unassigned shards follow the next steps: The index in the example has a primary shard unassigned. -2. To understand why an unassigned shard is not being assigned and what action you must take to allow {{es}} to assign it, use the [cluster allocation explanation API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-allocation-explain.html). +2. To understand why an unassigned shard is not being assigned and what action you must take to allow {{es}} to assign it, use the [cluster allocation explanation API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-allocation-explain). ```console GET _cluster/allocation/explain @@ -217,7 +217,7 @@ In order to diagnose the unassigned shards follow the next steps: 5. The decider which led to the `no` decision for the node. 6. An explanation as to why the decider returned a `no` decision, with a helpful hint pointing to the setting that led to the decision. -3. The explanation in our case indicates the index allocation configurations are not correct. 
To review your allocation settings, use the [get index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) and [cluster get settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html) APIs. +3. The explanation in our case indicates the index allocation configurations are not correct. To review your allocation settings, use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and [cluster get settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) APIs. ```console GET my-index-000001/_settings?flat_settings=true&include_defaults=true @@ -225,7 +225,7 @@ In order to diagnose the unassigned shards follow the next steps: GET _cluster/settings?flat_settings=true&include_defaults=true ``` -4. Change the settings using the [update index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) and [cluster update settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html) APIs to the correct values in order to allow the index to be allocated. +4. Change the settings using the [update index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) and [cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) APIs to the correct values in order to allow the index to be allocated. For more guidance on fixing the most common causes for unassinged shards please follow [this guide](red-yellow-cluster-status.md#fix-red-yellow-cluster-status). 
:::::: diff --git a/troubleshoot/elasticsearch/diagnosing-unknown-repositories.md b/troubleshoot/elasticsearch/diagnosing-unknown-repositories.md index 1d71c1c84..7e68cacc2 100644 --- a/troubleshoot/elasticsearch/diagnosing-unknown-repositories.md +++ b/troubleshoot/elasticsearch/diagnosing-unknown-repositories.md @@ -9,6 +9,6 @@ mapped_pages: When a snapshot repository is marked as "unknown", it means that an {{es}} node is unable to instantiate the repository due to an unknown repository type. This is usually caused by a missing plugin on the node. Make sure each node in the cluster has the required plugins by following the following steps: 1. Retrieve the affected nodes from the affected resources section of the health report. -2. Use the [nodes info API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-info.html) to retrieve the plugins installed on each node. +2. Use the [nodes info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-info) to retrieve the plugins installed on each node. 3. Cross reference this with a node that works correctly to find out which plugins are missing and install the missing plugins. diff --git a/troubleshoot/elasticsearch/diagnostic.md b/troubleshoot/elasticsearch/diagnostic.md index 7e0bc53ef..36148f8fe 100644 --- a/troubleshoot/elasticsearch/diagnostic.md +++ b/troubleshoot/elasticsearch/diagnostic.md @@ -41,7 +41,7 @@ You can also directly download the `diagnostics-X.X.X-dist.zip` file for the lat To capture an {{es}} diagnostic: -1. In a terminal, verify that your network and user permissions are sufficient to connect to your {{es}} cluster by polling the cluster’s [health](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html). +1. 
In a terminal, verify that your network and user permissions are sufficient to connect to your {{es}} cluster by polling the cluster’s [health](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-health). For example, with the parameters `host:localhost`, `port:9200`, and `username:elastic`, you’d use the following curl request: diff --git a/troubleshoot/elasticsearch/discovery-troubleshooting.md b/troubleshoot/elasticsearch/discovery-troubleshooting.md index 4a454a398..4798c7ccd 100644 --- a/troubleshoot/elasticsearch/discovery-troubleshooting.md +++ b/troubleshoot/elasticsearch/discovery-troubleshooting.md @@ -23,7 +23,7 @@ When a node wins the master election, it logs a message containing `elected-as-m If there is no elected master node and no node can win an election, all nodes will repeatedly log messages about the problem using a logger called `org.elasticsearch.cluster.coordination.ClusterFormationFailureHelper`. By default, this happens every 10 seconds. -Master elections only involve master-eligible nodes, so focus your attention on the master-eligible nodes in this situation. These nodes' logs will indicate the requirements for a master election, such as the discovery of a certain set of nodes. The [Health](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html) API on these nodes will also provide useful information about the situation. +Master elections only involve master-eligible nodes, so focus your attention on the master-eligible nodes in this situation. These nodes' logs will indicate the requirements for a master election, such as the discovery of a certain set of nodes. The [Health](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report) API on these nodes will also provide useful information about the situation. 
If the logs or the health report indicate that {{es}} can’t discover enough nodes to form a quorum, you must address the reasons preventing {{es}} from discovering the missing nodes. The missing nodes are needed to reconstruct the cluster metadata. Without the cluster metadata, the data in your cluster is meaningless. The cluster metadata is stored on a subset of the master-eligible nodes in the cluster. If a quorum can’t be discovered, the missing nodes were the ones holding the cluster metadata. @@ -38,7 +38,7 @@ If the logs suggest that discovery or master elections are failing due to timeou * Packet captures will reveal system-level and network-level faults, especially if you capture the network traffic simultaneously at all relevant nodes and analyse it alongside the {{es}} logs from those nodes. You should be able to observe any retransmissions, packet loss, or other delays on the connections between the nodes. * Long waits for particular threads to be available can be identified by taking stack dumps of the main {{es}} process (for example, using `jstack`) or a profiling trace (for example, using Java Flight Recorder) in the few seconds leading up to the relevant log message. - The [Nodes hot threads](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) API sometimes yields useful information, but bear in mind that this API also requires a number of `transport_worker` and `generic` threads across all the nodes in the cluster. The API may be affected by the very problem you’re trying to diagnose. `jstack` is much more reliable since it doesn’t require any JVM threads. + The [Nodes hot threads](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) API sometimes yields useful information, but bear in mind that this API also requires a number of `transport_worker` and `generic` threads across all the nodes in the cluster. 
The API may be affected by the very problem you’re trying to diagnose. `jstack` is much more reliable since it doesn’t require any JVM threads. The threads involved in discovery and cluster membership are mainly `transport_worker` and `cluster_coordination` threads, for which there should never be a long wait. There may also be evidence of long waits for threads in the {{es}} logs, particularly looking at warning logs from `org.elasticsearch.transport.InboundHandler`. See [Networking threading model](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#modules-network-threading-model) for more information. @@ -53,7 +53,7 @@ When a node wins the master election, it logs a message containing `elected-as-m * Packet captures will reveal system-level and network-level faults, especially if you capture the network traffic simultaneously at all relevant nodes and analyse it alongside the {{es}} logs from those nodes. You should be able to observe any retransmissions, packet loss, or other delays on the connections between the nodes. * Long waits for particular threads to be available can be identified by taking stack dumps of the main {{es}} process (for example, using `jstack`) or a profiling trace (for example, using Java Flight Recorder) in the few seconds leading up to the relevant log message. - The [Nodes hot threads](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) API sometimes yields useful information, but bear in mind that this API also requires a number of `transport_worker` and `generic` threads across all the nodes in the cluster. The API may be affected by the very problem you’re trying to diagnose. `jstack` is much more reliable since it doesn’t require any JVM threads. 
+ The [Nodes hot threads](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) API sometimes yields useful information, but bear in mind that this API also requires a number of `transport_worker` and `generic` threads across all the nodes in the cluster. The API may be affected by the very problem you’re trying to diagnose. `jstack` is much more reliable since it doesn’t require any JVM threads. The threads involved in discovery and cluster membership are mainly `transport_worker` and `cluster_coordination` threads, for which there should never be a long wait. There may also be evidence of long waits for threads in the {{es}} logs, particularly looking at warning logs from `org.elasticsearch.transport.InboundHandler`. See [Networking threading model](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#modules-network-threading-model) for more information. @@ -61,14 +61,14 @@ When a node wins the master election, it logs a message containing `elected-as-m ## Node cannot discover or join stable master [discovery-cannot-join-master] -If there is a stable elected master but a node can’t discover or join its cluster, it will repeatedly log messages about the problem using the `ClusterFormationFailureHelper` logger. The [Health](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html) API on the affected node will also provide useful information about the situation. Other log messages on the affected node and the elected master may provide additional information about the problem. If the logs suggest that the node cannot discover or join the cluster due to timeouts or network-related issues then narrow down the problem as follows. +If there is a stable elected master but a node can’t discover or join its cluster, it will repeatedly log messages about the problem using the `ClusterFormationFailureHelper` logger. 
The [Health](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report) API on the affected node will also provide useful information about the situation. Other log messages on the affected node and the elected master may provide additional information about the problem. If the logs suggest that the node cannot discover or join the cluster due to timeouts or network-related issues then narrow down the problem as follows. * GC pauses are recorded in the GC logs that {{es}} emits by default, and also usually by the `JvmMonitorService` in the main node logs. Use these logs to confirm whether or not the node is experiencing high heap usage with long GC pauses. If so, [the troubleshooting guide for high heap usage](high-jvm-memory-pressure.md) has some suggestions for further investigation but typically you will need to capture a heap dump and the [garbage collector logs](https://www.elastic.co/guide/en/elasticsearch/reference/current/advanced-configuration.html#gc-logging) during a time of high heap usage to fully understand the problem. * VM pauses also affect other processes on the same host. A VM pause also typically causes a discontinuity in the system clock, which {{es}} will report in its logs. If you see evidence of other processes pausing at the same time, or unexpected clock discontinuities, investigate the infrastructure on which you are running {{es}}. * Packet captures will reveal system-level and network-level faults, especially if you capture the network traffic simultaneously at all relevant nodes and analyse it alongside the {{es}} logs from those nodes. You should be able to observe any retransmissions, packet loss, or other delays on the connections between the nodes. 
* Long waits for particular threads to be available can be identified by taking stack dumps of the main {{es}} process (for example, using `jstack`) or a profiling trace (for example, using Java Flight Recorder) in the few seconds leading up to the relevant log message. - The [Nodes hot threads](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) API sometimes yields useful information, but bear in mind that this API also requires a number of `transport_worker` and `generic` threads across all the nodes in the cluster. The API may be affected by the very problem you’re trying to diagnose. `jstack` is much more reliable since it doesn’t require any JVM threads. + The [Nodes hot threads](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) API sometimes yields useful information, but bear in mind that this API also requires a number of `transport_worker` and `generic` threads across all the nodes in the cluster. The API may be affected by the very problem you’re trying to diagnose. `jstack` is much more reliable since it doesn’t require any JVM threads. The threads involved in discovery and cluster membership are mainly `transport_worker` and `cluster_coordination` threads, for which there should never be a long wait. There may also be evidence of long waits for threads in the {{es}} logs, particularly looking at warning logs from `org.elasticsearch.transport.InboundHandler`. See [Networking threading model](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#modules-network-threading-model) for more information. 
diff --git a/troubleshoot/elasticsearch/elasticsearch-reference.md b/troubleshoot/elasticsearch/elasticsearch-reference.md index c56b32f9c..891b3758a 100644 --- a/troubleshoot/elasticsearch/elasticsearch-reference.md +++ b/troubleshoot/elasticsearch/elasticsearch-reference.md @@ -17,7 +17,7 @@ If you’re using Elastic Cloud Hosted, then you can use AutoOps to monitor your ## General [troubleshooting-general] * [Fix common cluster issues](fix-common-cluster-issues.md) -* Several troubleshooting issues can be diagnosed using the [health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html). +* Several troubleshooting issues can be diagnosed using the [health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report). ## Data [troubleshooting-data] diff --git a/troubleshoot/elasticsearch/elasticsearch-reference/index-lifecycle-management-errors.md b/troubleshoot/elasticsearch/elasticsearch-reference/index-lifecycle-management-errors.md index e4c44bb1e..2a741cf05 100644 --- a/troubleshoot/elasticsearch/elasticsearch-reference/index-lifecycle-management-errors.md +++ b/troubleshoot/elasticsearch/elasticsearch-reference/index-lifecycle-management-errors.md @@ -46,7 +46,7 @@ PUT /my-index-000001 After five days, {{ilm-init}} attempts to shrink `my-index-000001` from two shards to four shards. Because the shrink action cannot *increase* the number of shards, this operation fails and {{ilm-init}} moves `my-index-000001` to the `ERROR` step. 
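When an index lands in the `ERROR` step like this, the explain response's `failed_step` and `step_info` fields identify which action failed and why. As a rough illustration of reading those fields programmatically, here is a sketch against a hypothetical explain response excerpt (the index name, policy, and error reason below are made up, loosely modeled on the explain output shown in this section):

```python
# Hypothetical excerpt of a GET /my-index-000001/_ilm/explain response;
# the index name, policy, and error reason are invented for illustration.
explain = {
    "indices": {
        "my-index-000001": {
            "managed": True,
            "policy": "shrink-index",
            "step": "ERROR",
            "failed_step": "shrink",
            "step_info": {
                "type": "illegal_argument_exception",
                "reason": "the number of target shards [4] must be less than the number of source shards [2]",
            },
        }
    }
}

def ilm_failures(explain_body):
    """Return {index: (failed_step, reason)} for every index in the ERROR step."""
    return {
        name: (info.get("failed_step"), info.get("step_info", {}).get("reason"))
        for name, info in explain_body["indices"].items()
        if info.get("step") == "ERROR"
    }

for index, (step, reason) in ilm_failures(explain).items():
    print(f"{index}: failed step {step!r}: {reason}")
```

Run against a real explain response, this would surface every managed index stuck in `ERROR`, which helps when a single policy change breaks many indices at once.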
-You can use the [{{ilm-init}} Explain API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-explain-lifecycle.html) to get information about what went wrong: +You can use the [{{ilm-init}} Explain API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-explain-lifecycle) to get information about what went wrong: ```console GET /my-index-000001/_ilm/explain @@ -133,7 +133,7 @@ Once you fix the problem that put an index in the `ERROR` step, you might need t POST /my-index-000001/_ilm/retry ``` -{{ilm-init}} subsequently attempts to re-run the step that failed. You can use the [{{ilm-init}} Explain API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-explain-lifecycle.html) to monitor the progress. +{{ilm-init}} subsequently attempts to re-run the step that failed. You can use the [{{ilm-init}} Explain API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-explain-lifecycle) to monitor the progress. ## Common {{ilm-init}} setting issues [_common_ilm_init_setting_issues] @@ -143,7 +143,7 @@ POST /my-index-000001/_ilm/retry When setting up an [{{ilm-init}} policy](../../../manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md) or [automating rollover with {{ilm-init}}](../../../manage-data/lifecycle/index-lifecycle-management.md), be aware that `min_age` can be relative to either the rollover time or the index creation time. -If you use [{{ilm-init}} rollover](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-rollover.html), `min_age` is calculated relative to the time the index was rolled over. This is because the [rollover API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html) generates a new index and updates the `age` of the previous index to reflect the rollover time. If the index hasn’t been rolled over, then the `age` is the same as the `creation_date` for the index. 
+If you use [{{ilm-init}} rollover](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-rollover.html), `min_age` is calculated relative to the time the index was rolled over. This is because the [rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover) generates a new index and updates the `age` of the previous index to reflect the rollover time. If the index hasn’t been rolled over, then the `age` is the same as the `creation_date` for the index. You can override how `min_age` is calculated using the `index.lifecycle.origination_date` and `index.lifecycle.parse_origination_date` [{{ilm-init}} settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-settings.html). @@ -160,7 +160,7 @@ Problems with rollover aliases are a common cause of errors. Consider using [dat ### Rollover alias [x] can point to multiple indices, found duplicated alias [x] in index template [z] [_rollover_alias_x_can_point_to_multiple_indices_found_duplicated_alias_x_in_index_template_z] -The target rollover alias is specified in an index template’s `index.lifecycle.rollover_alias` setting. You need to explicitly configure this alias *one time* when you [bootstrap the initial index](../../../manage-data/lifecycle/index-lifecycle-management.md#ilm-gs-alias-bootstrap). The rollover action then manages setting and updating the alias to [roll over](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html#rollover-index-api-desc) to each subsequent index. +The target rollover alias is specified in an index template’s `index.lifecycle.rollover_alias` setting. You need to explicitly configure this alias *one time* when you [bootstrap the initial index](../../../manage-data/lifecycle/index-lifecycle-management.md#ilm-gs-alias-bootstrap). 
The rollover action then manages setting and updating the alias to [roll over](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover#rollover-index-api-desc) to each subsequent index. Do not explicitly configure this same alias in the aliases section of an index template. @@ -171,7 +171,7 @@ See this [resolving `duplicate alias` video](https://www.youtube.com/watch?v=Ww5 Either the index is using the wrong alias or the alias does not exist. -Check the `index.lifecycle.rollover_alias` [index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html). To see what aliases are configured, use [_cat/aliases](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-alias.html). +Check the `index.lifecycle.rollover_alias` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings). To see what aliases are configured, use [_cat/aliases](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-aliases). See this [resolving `not point to index` video](https://www.youtube.com/watch?v=NKSe67x7aw8) for an example troubleshooting walkthrough. @@ -189,7 +189,7 @@ See this [resolving `empty or not defined` video](https://www.youtube.com/watch? Only one index can be designated as the write index for a particular alias. -Use the [aliases](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html) API to set `is_write_index:false` for all but one index. +Use the [aliases](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-update-aliases) API to set `is_write_index:false` for all but one index. See this [resolving `more than one write index` video](https://www.youtube.com/watch?v=jCUvZCT5Hm4) for an example troubleshooting walkthrough. 
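To script the same check, you can cross-reference the configured `rollover_alias` against the aliases actually attached to the index. A minimal sketch, using hypothetical `_settings` and `_alias` response excerpts (the index and alias names are invented):

```python
# Hypothetical responses from GET my-index-000001/_settings and
# GET my-index-000001/_alias; index and alias names are made up.
settings = {
    "my-index-000001": {
        "settings": {"index": {"lifecycle": {"rollover_alias": "my-alias"}}}
    }
}
aliases = {"my-index-000001": {"aliases": {"other-alias": {}}}}

def missing_rollover_aliases(settings_body, alias_body):
    """Return (index, alias) pairs where the configured rollover_alias
    does not appear among the index's actual aliases."""
    bad = []
    for index, body in settings_body.items():
        configured = (
            body["settings"]["index"].get("lifecycle", {}).get("rollover_alias")
        )
        actual = alias_body.get(index, {}).get("aliases", {})
        if configured and configured not in actual:
            bad.append((index, configured))
    return bad

print(missing_rollover_aliases(settings, aliases))  # [('my-index-000001', 'my-alias')]
```

Any pair printed here points at an index whose rollover alias needs to be created or corrected before ILM can roll it over.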
diff --git a/troubleshoot/elasticsearch/elasticsearch-reference/remote-clusters.md b/troubleshoot/elasticsearch/elasticsearch-reference/remote-clusters.md index 68106b938..94d749ede 100644 --- a/troubleshoot/elasticsearch/elasticsearch-reference/remote-clusters.md +++ b/troubleshoot/elasticsearch/elasticsearch-reference/remote-clusters.md @@ -15,7 +15,7 @@ You may encounter several issues when setting up a remote cluster for {{ccr}} or ### Checking whether a remote cluster has connected successfully [remote-clusters-troubleshooting-check-connection] -A successful call to the cluster settings update API for adding or updating remote clusters does not necessarily mean the configuration is successful. Use the [remote cluster info API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-remote-info.html) to verify that a local cluster is successfully connected to a remote cluster. +A successful call to the cluster settings update API for adding or updating remote clusters does not necessarily mean the configuration is successful. Use the [remote cluster info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) to verify that a local cluster is successfully connected to a remote cluster. ```console GET /_remote/info @@ -238,7 +238,7 @@ Check the port number and ensure you are indeed connecting to the remote cluster ### Connecting without a cross-cluster API key [remote-clusters-troubleshooting-no-api-key] -A local cluster uses the presence of a cross-cluster API key to determine the model with which it connects to a remote cluster. If a cross-cluster API key is present, it uses API key based authentication. Otherwise, it uses certificate based authentication. 
You can check what model is being used with the [remote cluster info API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-remote-info.html) on the local cluster: +A local cluster uses the presence of a cross-cluster API key to determine the model with which it connects to a remote cluster. If a cross-cluster API key is present, it uses API key based authentication. Otherwise, it uses certificate based authentication. You can check what model is being used with the [remote cluster info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) on the local cluster: ```console GET /_remote/info @@ -302,13 +302,13 @@ This does not show up in the logs of the remote cluster. #### Resolution [_resolution_5] -Add the cross-cluster API key to {{es}} keystore on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-reload-secure-settings.html) API to reload the keystore. +Add the cross-cluster API key to {{es}} keystore on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API to reload the keystore. ### Using the wrong API key type [remote-clusters-troubleshooting-wrong-api-key-type] -API key based authentication requires [cross-cluster API keys](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-cross-cluster-api-key.html). It does not work with [REST API keys](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html). +API key based authentication requires [cross-cluster API keys](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). It does not work with [REST API keys](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key). 
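One way to audit this across all remotes is to inspect the remote info response programmatically. A hedged sketch with a made-up response excerpt, assuming (per the model check described above) that a redacted `cluster_credentials` field appears only for API key based connections:

```python
# Hypothetical GET /_remote/info response; cluster aliases are invented, and
# the presence of `cluster_credentials` marking API key auth is an assumption.
remote_info = {
    "cluster_one": {
        "connected": True,
        "mode": "proxy",
        "cluster_credentials": "::es_redacted::",
    },
    "cluster_two": {"connected": True, "mode": "sniff"},
}

def connection_models(info):
    """Map each remote alias to its inferred security model."""
    return {
        alias: "api_key" if "cluster_credentials" in body else "certificate"
        for alias, body in info.items()
    }

print(connection_models(remote_info))
# {'cluster_one': 'api_key', 'cluster_two': 'certificate'}
```

A remote reported as `certificate` here but expected to use a cross-cluster API key is a candidate for the keystore fixes described in these sections.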
#### Symptom [_symptom_5] @@ -325,7 +325,7 @@ This does not show up in the logs of the remote cluster. #### Resolution [_resolution_6] -Ask the remote cluster administrator to create and distribute a [cross-cluster API key](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-cross-cluster-api-key.html). Replace the existing API key in the {{es}} keystore with this cross-cluster API key on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-reload-secure-settings.html) API to reload the keystore. +Ask the remote cluster administrator to create and distribute a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). Replace the existing API key in the {{es}} keystore with this cross-cluster API key on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API to reload the keystore. @@ -352,7 +352,7 @@ The remote cluster logs `Authentication using apikey failed`: #### Resolution [_resolution_7] -Ask the remote cluster administrator to create and distribute a [cross-cluster API key](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-cross-cluster-api-key.html). Replace the existing API key in the {{es}} keystore with this cross-cluster API key on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-reload-secure-settings.html) API to reload the keystore. +Ask the remote cluster administrator to create and distribute a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). 
Replace the existing API key in the {{es}} keystore with this cross-cluster API key on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API to reload the keystore. @@ -377,7 +377,7 @@ This does not show up in any logs. #### Resolution [_resolution_8] 1. Check that the local user has the necessary `remote_indices` or `remote_cluster` privileges. Grant sufficient `remote_indices` or `remote_cluster` privileges if necessary. -2. If permission is not an issue locally, ask the remote cluster administrator to create and distribute a [cross-cluster API key](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-cross-cluster-api-key.html). Replace the existing API key in the {{es}} keystore with this cross-cluster API key on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-reload-secure-settings.html) API to reload the keystore. +2. If permission is not an issue locally, ask the remote cluster administrator to create and distribute a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). Replace the existing API key in the {{es}} keystore with this cross-cluster API key on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API to reload the keystore. 
diff --git a/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md b/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md index b7c8662d7..5e11d9d95 100644 --- a/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md +++ b/troubleshoot/elasticsearch/fix-master-node-out-of-disk.md @@ -8,7 +8,7 @@ mapped_pages: # Fix master nodes out of disk [fix-master-node-out-of-disk] -{{es}} is using master nodes to coordinate the cluster. If the master or any master eligible nodes are running out of space, you need to ensure that they have enough disk space to function. If the [health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html) reports that your master node is out of space you need to increase the disk capacity of your master nodes. +{{es}} is using master nodes to coordinate the cluster. If the master or any master eligible nodes are running out of space, you need to ensure that they have enough disk space to function. If the [health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report) reports that your master node is out of space you need to increase the disk capacity of your master nodes. :::::::{tab-set} diff --git a/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md b/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md index 136dac52c..813f235ed 100644 --- a/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md +++ b/troubleshoot/elasticsearch/fix-other-node-out-of-disk.md @@ -8,7 +8,7 @@ mapped_pages: # Fix other role nodes out of disk [fix-other-node-out-of-disk] -{{es}} can use dedicated nodes to execute other functions apart from storing data or coordinating the cluster, for example machine learning. If one or more of these nodes are running out of space, you need to ensure that they have enough disk space to function. 
If the [health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html) reports that a node that is not a master and does not contain data is out of space you need to increase the disk capacity of this node.
+{{es}} can use dedicated nodes to execute other functions apart from storing data or coordinating the cluster, for example machine learning. If one or more of these nodes are running out of space, you need to ensure that they have enough disk space to function. If the [health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report) reports that a node that is not a master and does not contain data is out of space you need to increase the disk capacity of this node.
:::::::{tab-set}
diff --git a/troubleshoot/elasticsearch/fix-watermark-errors.md b/troubleshoot/elasticsearch/fix-watermark-errors.md
index 6a68c4189..eafc37669 100644
--- a/troubleshoot/elasticsearch/fix-watermark-errors.md
+++ b/troubleshoot/elasticsearch/fix-watermark-errors.md
@@ -23,7 +23,7 @@ If you’re using Elastic Cloud Hosted, then you can use AutoOps to monitor your
## Monitor rebalancing [fix-watermark-errors-rebalance]
-To verify that shards are moving off the affected node until it falls below high watermark., use the [cat shards API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-shards.html) and [cat recovery API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-recovery.html):
+To verify that shards are moving off the affected node until it falls below the high watermark, use the [cat shards API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-shards) and [cat recovery API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-recovery):
```console
GET _cat/shards?v=true
@@ -31,7 +31,7 @@ GET _cat/recovery?v=true&active_only=true
```
-If shards remain on the node keeping it about high watermark, use the [cluster
allocation explanation API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-allocation-explain.html) to get an explanation for their allocation status.
+If shards remain on the node keeping it above the high watermark, use the [cluster allocation explanation API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-allocation-explain) to get an explanation for their allocation status.
```console
GET _cluster/allocation/explain
@@ -93,11 +93,11 @@ To resolve watermark errors permanently, perform one of the following actions:
* Horizontally scale nodes of the affected [data tiers](../../manage-data/lifecycle/data-tiers.md).
* Vertically scale existing nodes to increase disk space.
-* Delete indices using the [delete index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html), either permanently if the index isn’t needed, or temporarily to later [restore](../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md).
+* Delete indices using the [delete index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete), either permanently if the index isn’t needed, or temporarily to later [restore](../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md).
* update related [ILM policy](../../manage-data/lifecycle/index-lifecycle-management.md) to push indices through to later [data tiers](../../manage-data/lifecycle/data-tiers.md) ::::{tip} -On {{ess}} and {{ece}}, indices may need to be temporarily deleted via its [Elasticsearch API Console](https://www.elastic.co/guide/en/cloud/current/ec-api-console.html) to later [snapshot restore](../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) in order to resolve [cluster health](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html) `status:red` which will block [attempted changes](../../deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md). If you experience issues with this resolution flow on {{ess}}, kindly reach out to [Elastic Support](https://support.elastic.co) for assistance. +On {{ess}} and {{ece}}, indices may need to be temporarily deleted via its [Elasticsearch API Console](https://www.elastic.co/guide/en/cloud/current/ec-api-console.html) to later [snapshot restore](../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) in order to resolve [cluster health](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-health) `status:red` which will block [attempted changes](../../deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md). If you experience issues with this resolution flow on {{ess}}, kindly reach out to [Elastic Support](https://support.elastic.co) for assistance. 
:::: diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md index 31d17bbef..34777a6f2 100644 --- a/troubleshoot/elasticsearch/high-cpu-usage.md +++ b/troubleshoot/elasticsearch/high-cpu-usage.md @@ -23,7 +23,7 @@ If you’re using Elastic Cloud Hosted, then you can use AutoOps to monitor your **Check CPU usage** -You can check the CPU usage per node using the [cat nodes API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-nodes.html): +You can check the CPU usage per node using the [cat nodes API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes): ```console GET _cat/nodes?v=true&s=cpu:desc @@ -58,7 +58,7 @@ To track CPU usage over time, we recommend enabling monitoring: ::::::: **Check hot threads** -If a node has high CPU usage, use the [nodes hot threads API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) to check for resource-intensive threads running on the node. +If a node has high CPU usage, use the [nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) to check for resource-intensive threads running on the node. ```console GET _nodes/hot_threads @@ -77,11 +77,11 @@ Heavy indexing and search loads can deplete smaller thread pools. To better hand **Spread out bulk requests** -While more efficient than individual requests, large [bulk indexing](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) or [multi-search](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html) requests still require CPU resources. If possible, submit smaller requests and allow more time between them. 
+While more efficient than individual requests, large [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) or [multi-search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch) requests still require CPU resources. If possible, submit smaller requests and allow more time between them. **Cancel long-running searches** -Long-running searches can block threads in the `search` thread pool. To check for these searches, use the [task management API](https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html). +Long-running searches can block threads in the `search` thread pool. To check for these searches, use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks). ```console GET _tasks?actions=*search&detailed diff --git a/troubleshoot/elasticsearch/high-jvm-memory-pressure.md b/troubleshoot/elasticsearch/high-jvm-memory-pressure.md index 81fe9b51e..7fd5572bb 100644 --- a/troubleshoot/elasticsearch/high-jvm-memory-pressure.md +++ b/troubleshoot/elasticsearch/high-jvm-memory-pressure.md @@ -23,7 +23,7 @@ If you’re using Elastic Cloud Hosted, then you can use AutoOps to monitor your ::::::{tab-item} Elasticsearch Service From your deployment menu, click **Elasticsearch**. Under **Instances**, each instance displays a **JVM memory pressure** indicator. When the JVM memory pressure reaches 75%, the indicator turns red. -You can also use the [nodes stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html) to calculate the current JVM memory pressure for each node. +You can also use the [nodes stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-stats) to calculate the current JVM memory pressure for each node. 
```console GET _nodes/stats?filter_path=nodes.*.jvm.mem.pools.old @@ -35,7 +35,7 @@ JVM Memory Pressure = `used_in_bytes` / `max_in_bytes` :::::: ::::::{tab-item} Self-managed -To calculate the current JVM memory pressure for each node, use the [nodes stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html). +To calculate the current JVM memory pressure for each node, use the [nodes stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-stats). ```console GET _nodes/stats?filter_path=nodes.*.jvm.mem.pools.old @@ -101,7 +101,7 @@ Defining too many fields or nesting fields too deeply can lead to [mapping explo **Spread out bulk requests** -While more efficient than individual requests, large [bulk indexing](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) or [multi-search](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html) requests can still create high JVM memory pressure. If possible, submit smaller requests and allow more time between them. +While more efficient than individual requests, large [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) or [multi-search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch) requests can still create high JVM memory pressure. If possible, submit smaller requests and allow more time between them. 
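The JVM memory pressure formula above (`used_in_bytes` / `max_in_bytes` of the old generation pool) applies directly to the filtered nodes stats response. A minimal sketch, using a made-up response excerpt (the node ID and byte counts are hypothetical):

```python
# Hypothetical excerpt of GET _nodes/stats?filter_path=nodes.*.jvm.mem.pools.old;
# the node ID and byte counts are invented for illustration.
stats = {
    "nodes": {
        "node_id_1": {
            "jvm": {
                "mem": {
                    "pools": {
                        "old": {
                            "used_in_bytes": 1_800_000_000,
                            "max_in_bytes": 2_000_000_000,
                        }
                    }
                }
            }
        }
    }
}

def jvm_memory_pressure(node_body):
    """JVM memory pressure = old pool used_in_bytes / max_in_bytes."""
    old = node_body["jvm"]["mem"]["pools"]["old"]
    return old["used_in_bytes"] / old["max_in_bytes"]

for node_id, body in stats["nodes"].items():
    print(f"{node_id}: {jvm_memory_pressure(body):.0%}")  # node_id_1: 90%
```

A node persistently near or above the 75% threshold mentioned above is worth investigating further.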
**Upgrade node memory** diff --git a/troubleshoot/elasticsearch/hotspotting.md b/troubleshoot/elasticsearch/hotspotting.md index 0dfe02d9e..bedbd4a00 100644 --- a/troubleshoot/elasticsearch/hotspotting.md +++ b/troubleshoot/elasticsearch/hotspotting.md @@ -22,7 +22,7 @@ See [this video](https://www.youtube.com/watch?v=Q5ODJ5nIKAM) for a walkthrough ## Detect hot spotting [detect] -Hot spotting most commonly surfaces as significantly elevated resource utilization (of `disk.percent`, `heap.percent`, or `cpu`) among a subset of nodes as reported via [cat nodes](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-nodes.html). Individual spikes aren’t necessarily problematic, but if utilization repeatedly spikes or consistently remains high over time (for example longer than 30 seconds), the resource may be experiencing problematic hot spotting. +Hot spotting most commonly surfaces as significantly elevated resource utilization (of `disk.percent`, `heap.percent`, or `cpu`) among a subset of nodes as reported via [cat nodes](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes). Individual spikes aren’t necessarily problematic, but if utilization repeatedly spikes or consistently remains high over time (for example longer than 30 seconds), the resource may be experiencing problematic hot spotting. For example, let’s show case two separate plausible issues using cat nodes: @@ -65,7 +65,7 @@ Here are some common improper hardware setups which may contribute to hot spotti ### Node level [causes-shards-nodes] -You can check for shard balancing via [cat allocation](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-allocation.html), though as of version 8.6, [desired balancing](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html) may no longer fully expect to balance shards. 
Kindly note, both methods may temporarily show problematic imbalance during [cluster stability issues](../../deploy-manage/distributed-architecture/discovery-cluster-formation/cluster-fault-detection.md). +You can check for shard balancing via [cat allocation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-allocation), though as of version 8.6, [desired balancing](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html) may no longer aim to fully balance shard counts. Note that both methods may temporarily show problematic imbalance during [cluster stability issues](../../deploy-manage/distributed-architecture/discovery-cluster-formation/cluster-fault-detection.md). For example, let’s showcase two separate plausible issues using cat allocation: @@ -82,13 +82,13 @@ node_2 31 52 44.6gb 372.7gb node_3 445 43 271.5gb 289.4gb ``` -Here we see two significantly unique situations. `node_2` has recently restarted, so it has a much lower number of shards than all other nodes. This also relates to `disk.indices` being much smaller than `disk.used` while shards are recovering as seen via [cat recovery](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-recovery.html). While `node_2`'s shard count is low, it may become a write hot spot due to ongoing [ILM rollovers](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-rollover.html). This is a common root cause of write hot spots covered in the next section. +Here we see two distinct situations. `node_2` has recently restarted, so it has a much lower number of shards than all other nodes. This also relates to `disk.indices` being much smaller than `disk.used` while shards are recovering, as seen via [cat recovery](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-recovery).
While `node_2`'s shard count is low, it may become a write hot spot due to ongoing [ILM rollovers](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-rollover.html). This is a common root cause of write hot spots covered in the next section. The second situation is that `node_3` has a higher `disk.percent` than `node_1`, even though they hold roughly the same number of shards. This occurs when either shards are not evenly sized (refer to [Aim for shards of up to 200M documents, or with sizes between 10GB and 50GB](../../deploy-manage/production-guidance/optimize-performance/size-shards.md#shard-size-recommendation)) or when there are a lot of empty indices. Cluster rebalancing based on desired balance does much of the heavy lifting of keeping nodes from hot spotting. It can be limited by either nodes hitting [watermarks](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#disk-based-shard-allocation) (refer to [fixing disk watermark errors](fix-watermark-errors.md)) or by a write-heavy index’s total shards being much lower than the written-to nodes. -You can confirm hot spotted nodes via [the nodes stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html), potentially polling twice over time to only checking for the stats differences between them rather than polling once giving you stats for the node’s full [node uptime](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-usage.html). +You can confirm hot spotted nodes via [the nodes stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-stats), potentially polling twice and comparing only the differences between the two samples, rather than polling once, which gives you stats over the node’s full [node uptime](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-usage).
For example, to check all nodes indexing stats: ```console GET _nodes/stats?human&filter_path=nodes.*.name,nodes.*.indices.indexing @@ -97,7 +97,7 @@ GET _nodes/stats?human&filter_path=nodes.*.name,nodes.*.indices.indexing ### Index level [causes-shards-index] -Hot spotted nodes frequently surface via [cat thread pool](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-thread-pool.html)'s `write` and `search` queue backups. For example: +Hot spotted nodes frequently surface via [cat thread pool](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-thread-pool)'s `write` and `search` queue backups. For example: ```console GET _cat/thread_pool/write,search?v=true&s=n,nn&h=n,nn,q,a,r,c @@ -117,9 +117,9 @@ write node_3 1 5 0 8714 Here you can see two significantly unique situations. Firstly, `node_1` has a severely backed up write queue compared to other nodes. Secondly, `node_3` shows historically completed writes that are double any other node. These are both probably due to either poorly distributed write-heavy indices, or to multiple write-heavy indices allocated to the same node. Since primary and replica writes are majorly the same amount of cluster work, we usually recommend setting [`index.routing.allocation.total_shards_per_node`](https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-total-shards.html#total-shards-per-node) to force index spreading after lining up index shard counts to total nodes. -We normally recommend heavy-write indices have sufficient primary `number_of_shards` and replica `number_of_replicas` to evenly spread across indexing nodes. Alternatively, you can [reroute](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html) shards to more quiet nodes to alleviate the nodes with write hot spotting. +We normally recommend heavy-write indices have sufficient primary `number_of_shards` and replica `number_of_replicas` to evenly spread across indexing nodes. 
Alternatively, you can [reroute](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-reroute) shards to quieter nodes to alleviate write hot spotting. -If it’s non-obvious what indices are problematic, you can introspect further via [the index stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html) by running: +If it’s non-obvious what indices are problematic, you can introspect further via [the index stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-stats) by running: ```console GET _stats?level=shards&human&expand_wildcards=all&filter_path=indices.*.total.indexing.index_total @@ -143,13 +143,13 @@ cat shard_stats.json | jq -rc 'sort_by(-.avg_indexing)[]' | head ## Task loads [causes-tasks] -Shard distribution problems will most-likely surface as task load as seen above in the [cat thread pool](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-thread-pool.html) example. It is also possible for tasks to hot spot a node either due to individual qualitative expensiveness or overall quantitative traffic loads. +Shard distribution problems will most likely surface as task load, as seen above in the [cat thread pool](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-thread-pool) example. It is also possible for tasks to hot spot a node, either because individual tasks are qualitatively expensive or because overall traffic volume is high.
Let’s say it reported `warmer` threads at `100% cpu` related to `GlobalOrdinalsBuilder`. This would let you know to inspect [field data’s global ordinals](https://www.elastic.co/guide/en/elasticsearch/reference/current/eager-global-ordinals.html). +For example, if [cat thread pool](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-thread-pool) reported a high queue on the `warmer` [thread pool](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html), you would look up the affected node’s [hot threads](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads). Let’s say it reported `warmer` threads at `100% cpu` related to `GlobalOrdinalsBuilder`. This would let you know to inspect [field data’s global ordinals](https://www.elastic.co/guide/en/elasticsearch/reference/current/eager-global-ordinals.html). -Alternatively, let’s say [cat nodes](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-nodes.html) shows a hot spotted master node and [cat thread pool](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-thread-pool.html) shows general queuing across nodes. This would suggest the master node is overwhelmed. To resolve this, first ensure [hardware high availability](../../deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md) setup and then look to ephemeral causes. In this example, [the nodes hot threads API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) reports multiple threads in `other` which indicates they’re waiting on or blocked by either garbage collection or I/O.
+Alternatively, let’s say [cat nodes](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) shows a hot spotted master node and [cat thread pool](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-thread-pool) shows general queuing across nodes. This would suggest the master node is overwhelmed. To resolve this, first ensure a [hardware high availability](../../deploy-manage/production-guidance/availability-and-resilience/resilience-in-small-clusters.md) setup and then look to ephemeral causes. In this example, [the nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) reports multiple threads in `other`, which indicates they’re waiting on or blocked by either garbage collection or I/O. -For either of these example situations, a good way to confirm the problematic tasks is to look at longest running non-continuous (designated `[c]`) tasks via [cat task management](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-tasks.html). This can be supplemented checking longest running cluster sync tasks via [cat pending tasks](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-pending-tasks.html). Using a third example, +For either of these example situations, a good way to confirm the problematic tasks is to look at the longest-running non-continuous (designated `[c]`) tasks via [cat task management](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-tasks). This can be supplemented by checking the longest-running cluster sync tasks via [cat pending tasks](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-pending-tasks). Using a third example, ```console GET _cat/tasks?v&s=time:desc&h=type,action,running_time,node,cancellable @@ -163,7 +163,7 @@ direct indices:data/read/eql 10m node_1 true ...
``` -This surfaces a problematic [EQL query](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-search-api.html). We can gain further insight on it via [the task management API](https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html), +This surfaces a problematic [EQL query](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-eql-search). We can gain further insight into it via [the task management API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks), ```console GET _tasks?human&detailed @@ -175,5 +175,5 @@ Its response contains a `description` that reports this query: indices[winlogbeat-*,logs-window*], sequence by winlog.computer_name with maxspan=1m\n\n[authentication where host.os.type == "windows" and event.action:"logged-in" and\n event.outcome == "success" and process.name == "svchost.exe" ] by winlog.event_data.TargetLogonId ``` -This lets you know which indices to check (`winlogbeat-*,logs-window*`), as well as the [EQL search](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-search-api.html) request body. Most likely this is [SIEM related](https://www.elastic.co/guide/en/security/current/es-overview.html). You can combine this with [audit logging](../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) as needed to trace the request source. +This lets you know which indices to check (`winlogbeat-*,logs-window*`), as well as the [EQL search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-eql-search) request body. Most likely this is [SIEM related](https://www.elastic.co/guide/en/security/current/es-overview.html). You can combine this with [audit logging](../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) as needed to trace the request source.
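If the request's source cannot be addressed directly, note that the `cancellable` column in the cat tasks output above was `true`, so a long-running task like this EQL search can be stopped via the task management API's cancel endpoint (the task ID below is hypothetical):

```console
POST _tasks/node_1:12345/_cancel
```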
diff --git a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md index 65496c265..aab5dde60 100644 --- a/troubleshoot/elasticsearch/increase-cluster-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-cluster-shard-limit.md @@ -17,7 +17,7 @@ In order to fix this follow the next steps: :::::::{tab-set} ::::::{tab-item} Elasticsearch Service -In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node in the cluster. We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html) and increasing the configured value. +In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node in the cluster. We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increasing the configured value. **Use {{kib}}** @@ -35,7 +35,7 @@ In order to get the shards assigned we’ll need to increase the number of shard :class: screenshot ::: -4. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html): +4. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings): ```console GET /_cluster/settings?flat_settings @@ -54,7 +54,7 @@ In order to get the shards assigned we’ll need to increase the number of shard 1. Represents the current configured value for the total number of shards that can reside on one node in the system. -5. 
[Increase](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html) the value for the total number of shards that can be assigned on one node to a higher value: +5. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: ```console PUT _cluster/settings @@ -71,7 +71,7 @@ In order to get the shards assigned we’ll need to increase the number of shard ::::::{tab-item} Self-managed In order to get the shards assigned you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. -To inspect which tier is an index targeting for assignment, use the [get index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings @@ -92,9 +92,9 @@ The response will look like this: 1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster.
-Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html) and increasing the configured value: +Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspect the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increase the configured value: -1. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html) for the index with unassigned shards: +1. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) for the index with unassigned shards: ```console GET /_cluster/settings?flat_settings @@ -113,7 +113,7 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec 1. Represents the current configured value for the total number of shards that can reside on one node in the system. -2. [Increase](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html) the value for the total number of shards that can be assigned on one node to a higher value: +2.
[Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: ```console PUT _cluster/settings diff --git a/troubleshoot/elasticsearch/increase-shard-limit.md b/troubleshoot/elasticsearch/increase-shard-limit.md index f726b8d2b..d7358b8df 100644 --- a/troubleshoot/elasticsearch/increase-shard-limit.md +++ b/troubleshoot/elasticsearch/increase-shard-limit.md @@ -17,7 +17,7 @@ In order to fix this follow the next steps: :::::::{tab-set} ::::::{tab-item} Elasticsearch Service -In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node. We’ll achieve this by inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) and increasing the configured value for the indices that have shards unassigned. +In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node. We’ll achieve this by inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value for the indices that have shards unassigned. **Use {{kib}}** @@ -35,7 +35,7 @@ In order to get the shards assigned we’ll need to increase the number of shard :class: screenshot ::: -4. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) for the index with unassigned shards: +4. 
Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards: ```console GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings @@ -55,7 +55,7 @@ In order to get the shards assigned we’ll need to increase the number of shard 1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index. -5. [Increase](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) the value for the total number of shards that can be assigned on one node to a higher value: +5. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of shards that can be assigned on one node to a higher value: ```console PUT /my-index-000001/_settings @@ -72,7 +72,7 @@ In order to get the shards assigned we’ll need to increase the number of shard ::::::{tab-item} Self-managed In order to get the shards assigned you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes.
-To inspect which tier is an index targeting for assignment, use the [get index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings @@ -93,9 +93,9 @@ The response will look like this: 1. Represents a comma separated list of data tier node roles this index is allowed to be allocated on, the first one in the list being the one with the higher priority i.e. the tier the index is targeting. e.g. in this example the tier preference is `data_warm,data_hot` so the index is targeting the `warm` tier and more nodes with the `data_warm` role are needed in the {{es}} cluster. -Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) and increasing the configured value will allow more shards to be assigned on the same node. +Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value will allow more shards to be assigned on the same node. -1.
Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) for the index with unassigned shards: +1. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards: ```console GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings @@ -115,7 +115,7 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec 1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index. -2. [Increase](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) the total number of shards that can be assigned on one node or reset the value to unbounded (`-1`): +2. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the total number of shards that can be assigned on one node or reset the value to unbounded (`-1`): ```console PUT /my-index-000001/_settings diff --git a/troubleshoot/elasticsearch/increase-tier-capacity.md b/troubleshoot/elasticsearch/increase-tier-capacity.md index f66ab7f8d..64604f334 100644 --- a/troubleshoot/elasticsearch/increase-tier-capacity.md +++ b/troubleshoot/elasticsearch/increase-tier-capacity.md @@ -34,7 +34,7 @@ One way to get the replica shards assigned is to add an availability zone. 
This ::: -To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings @@ -110,9 +110,9 @@ If it is not possible to increase the size per zone or the number of availabilit himrst ``` - You can count the rows containing the letter representing the target tier to know how many nodes you have. See [{{api-query-parms-title}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-nodes.html#cat-nodes-api-query-params) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier. + You can count the rows containing the letter representing the target tier to know how many nodes you have. See [{{api-query-parms-title}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier. -4. [Decrease](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. 
Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1. +4. [Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1. ```console PUT /my-index-000001/_settings @@ -129,7 +129,7 @@ If it is not possible to increase the size per zone or the number of availabilit ::::::{tab-item} Self-managed In order to get the replica shards assigned you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes. 
-To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: +To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting: ```console GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings @@ -188,9 +188,9 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec himrst ``` - You can count the rows containing the letter representing the target tier to know how many nodes you have. See [{{api-query-parms-title}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-nodes.html#cat-nodes-api-query-params) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier. + You can count the rows containing the letter representing the target tier to know how many nodes you have. See [{{api-query-parms-title}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) for details. The example above has two rows containing `h`, so there are two nodes in the hot tier. -3. [Decrease](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. 
Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1. +3. [Decrease](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of replica shards required for this index. As replica shards cannot reside on the same node as primary shards for [high availability](../../deploy-manage/production-guidance/availability-and-resilience.md), the new value needs to be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for `index.number_of_replicas` is 1. ```console PUT /my-index-000001/_settings diff --git a/troubleshoot/elasticsearch/mapping-explosion.md b/troubleshoot/elasticsearch/mapping-explosion.md index 39508902d..7f583ce48 100644 --- a/troubleshoot/elasticsearch/mapping-explosion.md +++ b/troubleshoot/elasticsearch/mapping-explosion.md @@ -9,9 +9,9 @@ mapped_pages: Mapping explosion may surface as the following performance symptoms: -* [CAT nodes](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-nodes.html) reporting high heap or CPU on the main node and/or nodes hosting the indices shards. This may potentially escalate to temporary node unresponsiveness and/or main overwhelm. -* [CAT tasks](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-tasks.html) reporting long search durations only related to this index or indices, even on simple searches. -* [CAT tasks](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-tasks.html) reporting long index durations only related to this index or indices. This usually relates to [pending tasks](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-pending.html) reporting that the coordinating node is waiting for all other nodes to confirm they are on mapping update request. 
+* [CAT nodes](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes) reporting high heap or CPU on the main node and/or nodes hosting the indices shards. This may potentially escalate to temporary node unresponsiveness and/or main overwhelm.
+* [CAT tasks](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-tasks) reporting long search durations only related to this index or indices, even on simple searches.
+* [CAT tasks](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-tasks) reporting long index durations only related to this index or indices. This usually relates to [pending tasks](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-pending-tasks) reporting that the coordinating node is waiting for all other nodes to confirm they have applied the mapping update.
* Discover’s **Fields for wildcard** page-loading API command or [Dev Tools](../../explore-analyze/query-filter/tools/console.md) page-refreshing Autocomplete API commands are taking a long time (more than 10 seconds) or timing out in the browser’s Developer Tools Network tab. For more information, refer to our [walkthrough on troubleshooting Discover](https://www.elastic.co/blog/troubleshooting-guide-common-issues-kibana-discover-load).
* Discover’s **Available fields** taking a long time to compile Javascript in the browser’s Developer Tools Performance tab. This may potentially escalate to temporary browser page unresponsiveness.
* Kibana’s [alerting](../../explore-analyze/alerts-cases/alerts.md) or [security rules](../../solutions/security/detect-and-alert.md) may error `The content length (X) is bigger than the maximum allowed string (Y)` where `X` is the attempted payload and `Y` is {{kib}}'s [`server-maxPayload`](../../deploy-manage/deploy/self-managed/configure.md#server-maxPayload).
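One way to confirm that these symptoms trace back to an oversized mapping is to count the mapped fields, along the lines of the field-total checks described later in this file. A minimal sketch, assuming a sample `GET my-index-000001/_mapping`-style payload (the index and field names are illustrative, not live output, and the count only approximates how `index.mapping.total_fields.limit` accounts for multi-fields and aliases):

```python
# Hedged sketch: approximate the mapped-field count of an index from a
# get-mapping response. The payload below stands in for GET <index>/_mapping.
sample_mapping = {
    "my-index-000001": {
        "mappings": {
            "properties": {
                "user": {
                    "properties": {
                        "name": {"type": "keyword"},
                        "age": {"type": "integer"},
                    }
                },
                "message": {"type": "text"},
            }
        }
    }
}

def count_fields(properties: dict) -> int:
    """Count mapped fields, descending into object/nested sub-properties."""
    total = 0
    for field in properties.values():
        total += 1  # the field itself (object fields count toward the limit too)
        if "properties" in field:
            total += count_fields(field["properties"])
    return total

for index, body in sample_mapping.items():
    n = count_fields(body["mappings"].get("properties", {}))
    # Compare n against index.mapping.total_fields.limit (default 1000).
    print(f"{index}: ~{n} mapped fields")
```

A count approaching the configured limit, together with the symptoms above, points at mapping explosion rather than query cost.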
@@ -36,7 +36,7 @@ Modifying to the [nested](https://www.elastic.co/guide/en/elasticsearch/referenc To confirm the field totals of an index to check for mapping explosion: * Check {{es}} cluster logs for errors `Limit of total fields [X] in index [Y] has been exceeded` where `X` is the value of `index.mapping.total_fields.limit` and `Y` is your index. The correlated ingesting source log error would be `Limit of total fields [X] has been exceeded while adding new fields [Z]` where `Z` is attempted new fields. -* For top-level fields, poll [field capabilities](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-field-caps.html) for `fields=*`. +* For top-level fields, poll [field capabilities](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-field-caps) for `fields=*`. * Search the output of [get mapping](../../manage-data/data-store/mapping.md) for `"type"`. * If you’re inclined to use the [third-party tool JQ](https://stedolan.github.io/jq), you can process the [get mapping](../../manage-data/data-store/mapping.md) `mapping.json` output. @@ -45,12 +45,12 @@ To confirm the field totals of an index to check for mapping explosion: ``` -You can use [analyze index disk usage](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-disk-usage.html) to find fields which are never or rarely populated as easy wins. +You can use [analyze index disk usage](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-disk-usage) to find fields which are never or rarely populated as easy wins. ## Complex explosions [complex] -Mapping explosions also covers when an individual index field totals are within limits but combined indices fields totals are very high. 
It’s very common for symptoms to first be noticed on a [data view](../../explore-analyze/find-and-organize/data-views.md) and be traced back to an individual index or a subset of indices via the [resolve index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-resolve-index-api.html).
+Mapping explosion can also occur when an individual index’s field total is within limits but the combined field totals across indices are very high. It’s very common for symptoms to first be noticed on a [data view](../../explore-analyze/find-and-organize/data-views.md) and be traced back to an individual index or a subset of indices via the [resolve index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-resolve-index).

However, though less common, it is possible to only experience mapping explosions on the combination of backing indices. For example, a [data stream](../../manage-data/data-store/index-types/data-streams.md)'s backing indices may all be at the field total limit but each contain fields unique from the others.

@@ -64,8 +64,8 @@ If your issue only surfaces via a [data view](../../explore-analyze/find-and-organize/data-views.md), however, you may still consider:

Mapping explosion is not easily resolved, so it is better prevented via the above. Encountering it usually indicates unexpected upstream data changes or planning failures. If encountered, we recommend reviewing your data architecture. The following options are additional to the ones discussed earlier on this page; they should be applied as best use-case applicable:

* Disable [dynamic mappings](../../manage-data/data-store/mapping.md).
-* [Reindex](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) into an index with a corrected mapping, either via [index template](../../manage-data/data-store/templates.md) or [explicitly set](../../manage-data/data-store/mapping.md).
-* If index is unneeded and/or historical, consider [deleting](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html). +* [Reindex](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) into an index with a corrected mapping, either via [index template](../../manage-data/data-store/templates.md) or [explicitly set](../../manage-data/data-store/mapping.md). +* If index is unneeded and/or historical, consider [deleting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete). * [Export](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html) and [re-import](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) data into a mapping-corrected index after [pruning](https://www.elastic.co/guide/en/logstash/current/plugins-filters-prune.html) problematic fields via Logstash. -[Splitting index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-split-index.html) would not resolve the core issue. +[Splitting index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-split) would not resolve the core issue. diff --git a/troubleshoot/elasticsearch/monitoring-troubleshooting.md b/troubleshoot/elasticsearch/monitoring-troubleshooting.md index 0c4cbf563..85a1468b6 100644 --- a/troubleshoot/elasticsearch/monitoring-troubleshooting.md +++ b/troubleshoot/elasticsearch/monitoring-troubleshooting.md @@ -15,7 +15,7 @@ For issues that you cannot fix yourself … we’re here to help. If you are an **Symptoms**: There is no information about your cluster on the **Stack Monitoring** page in {{kib}}. -**Resolution**: Check whether the appropriate indices exist on the monitoring cluster. 
For example, use the [cat indices](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html) command to verify that there is a `.monitoring-kibana*` index for your {{kib}} monitoring data and a `.monitoring-es*` index for your {{es}} monitoring data. If you are collecting monitoring data by using {{metricbeat}} the indices have `-mb` in their names. If the indices do not exist, review your configuration. For example, see [*Monitoring in a production environment*](../../deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md). +**Resolution**: Check whether the appropriate indices exist on the monitoring cluster. For example, use the [cat indices](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices) command to verify that there is a `.monitoring-kibana*` index for your {{kib}} monitoring data and a `.monitoring-es*` index for your {{es}} monitoring data. If you are collecting monitoring data by using {{metricbeat}} the indices have `-mb` in their names. If the indices do not exist, review your configuration. For example, see [*Monitoring in a production environment*](../../deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md). ## Monitoring data for some {{stack}} nodes or instances is missing from {{kib}} [monitoring-troubleshooting-uuid] diff --git a/troubleshoot/elasticsearch/red-yellow-cluster-status.md b/troubleshoot/elasticsearch/red-yellow-cluster-status.md index 629bf256b..8a3e05782 100644 --- a/troubleshoot/elasticsearch/red-yellow-cluster-status.md +++ b/troubleshoot/elasticsearch/red-yellow-cluster-status.md @@ -26,7 +26,7 @@ If you’re using Elastic Cloud Hosted, then you can use AutoOps to monitor your **Check your cluster status** -Use the [cluster health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html). 
+Use the [cluster health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-health). ```console GET _cluster/health?filter_path=status,*_shards @@ -36,7 +36,7 @@ A healthy cluster has a green `status` and zero `unassigned_shards`. A yellow st **View unassigned shards** -To view unassigned shards, use the [cat shards API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-shards.html). +To view unassigned shards, use the [cat shards API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-shards). ```console GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state @@ -44,7 +44,7 @@ GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state Unassigned shards have a `state` of `UNASSIGNED`. The `prirep` value is `p` for primary shards and `r` for replicas. -To understand why an unassigned shard is not being assigned and what action you must take to allow {{es}} to assign it, use the [cluster allocation explanation API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-allocation-explain.html). +To understand why an unassigned shard is not being assigned and what action you must take to allow {{es}} to assign it, use the [cluster allocation explanation API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-allocation-explain). ```console GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.* @@ -78,9 +78,9 @@ Shards often become unassigned when a data node leaves the cluster. This can occ After you resolve the issue and recover the node, it will rejoin the cluster. {{es}} will then automatically allocate any unassigned shards. -You can monitor this process by [checking your cluster health](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html). 
The number of unallocated shards should progressively decrease until green status is reached. +You can monitor this process by [checking your cluster health](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-health). The number of unallocated shards should progressively decrease until green status is reached. -To avoid wasting resources on temporary issues, {{es}} [delays allocation](../../deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/delaying-allocation-when-node-leaves.md) by one minute by default. If you’ve recovered a node and don’t want to wait for the delay period, you can call the [cluster reroute API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html) with no arguments to start the allocation process. The process runs asynchronously in the background. +To avoid wasting resources on temporary issues, {{es}} [delays allocation](../../deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/delaying-allocation-when-node-leaves.md) by one minute by default. If you’ve recovered a node and don’t want to wait for the delay period, you can call the [cluster reroute API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-reroute) with no arguments to start the allocation process. The process runs asynchronously in the background. ```console POST _cluster/reroute @@ -95,7 +95,7 @@ Misconfigured allocation settings can result in an unassigned primary shard. 
The * [Allocation filtering](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#cluster-shard-allocation-filtering) cluster settings * [Allocation awareness](../../deploy-manage/distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md) cluster settings -To review your allocation settings, use the [get index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) and [cluster get settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-get-settings.html) APIs. +To review your allocation settings, use the [get index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and [cluster get settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) APIs. ```console GET my-index/_settings?flat_settings=true&include_defaults=true @@ -103,7 +103,7 @@ GET my-index/_settings?flat_settings=true&include_defaults=true GET _cluster/settings?flat_settings=true&include_defaults=true ``` -You can change the settings using the [update index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) and [cluster update settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html) APIs. +You can change the settings using the [update index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) and [cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) APIs. 
### Allocate or reduce replicas [fix-cluster-status-allocation-replicas] @@ -125,7 +125,7 @@ PUT _settings {{es}} uses a [low disk watermark](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-cluster.html#disk-based-shard-allocation) to ensure data nodes have enough disk space for incoming shards. By default, {{es}} does not allocate shards to nodes using more than 85% of disk space. -To check the current disk space of your nodes, use the [cat allocation API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-allocation.html). +To check the current disk space of your nodes, use the [cat allocation API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-allocation). ```console GET _cat/allocation?v=true&h=node,shards,disk.* @@ -136,13 +136,13 @@ If your nodes are running low on disk space, you have a few options: * Upgrade your nodes to increase disk space. * Add more nodes to the cluster. * Delete unneeded indices to free up space. If you use {{ilm-init}}, you can update your lifecycle policy to use [searchable snapshots](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-searchable-snapshot.html) or add a delete phase. If you no longer need to search the data, you can use a [snapshot](../../deploy-manage/tools/snapshot-and-restore.md) to store it off-cluster. -* If you no longer write to an index, use the [force merge API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-forcemerge.html) or {{ilm-init}}'s [force merge action](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-forcemerge.html) to merge its segments into larger ones. +* If you no longer write to an index, use the [force merge API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-forcemerge) or {{ilm-init}}'s [force merge action](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-forcemerge.html) to merge its segments into larger ones. 
```console POST my-index/_forcemerge ``` -* If an index is read-only, use the [shrink index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-shrink-index.html) or {{ilm-init}}'s [shrink action](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-shrink.html) to reduce its primary shard count. +* If an index is read-only, use the [shrink index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-shrink) or {{ilm-init}}'s [shrink action](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-shrink.html) to reduce its primary shard count. ```console POST my-index/_shrink/my-shrunken-index @@ -191,12 +191,12 @@ Shard allocation requires JVM heap memory. High JVM memory pressure can trigger ### Recover data for a lost primary shard [fix-cluster-status-restore] -If a node containing a primary shard is lost, {{es}} can typically replace it using a replica on another node. If you can’t recover the node and replicas don’t exist or are irrecoverable, [Allocation Explain](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-allocation-explain.html) will report `no_valid_shard_copy` and you’ll need to do one of the following: +If a node containing a primary shard is lost, {{es}} can typically replace it using a replica on another node. 
If you can’t recover the node and replicas don’t exist or are irrecoverable, [Allocation Explain](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-allocation-explain) will report `no_valid_shard_copy` and you’ll need to do one of the following: * restore the missing data from [snapshot](../../deploy-manage/tools/snapshot-and-restore.md) * index the missing data from its original data source -* accept data loss on the index-level by running [Delete Index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html) -* accept data loss on the shard-level by executing [Cluster Reroute](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html) allocate_stale_primary or allocate_empty_primary command with `accept_data_loss: true` +* accept data loss on the index-level by running [Delete Index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete) +* accept data loss on the shard-level by executing [Cluster Reroute](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-reroute) allocate_stale_primary or allocate_empty_primary command with `accept_data_loss: true` ::::{warning} Only use this option if node recovery is no longer possible. This process allocates an empty primary shard. If the node later rejoins the cluster, {{es}} will overwrite its primary shard with data from this newer empty shard, resulting in data loss. 
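For the shard-level `allocate_empty_primary` option above, the reroute request body can be assembled as follows. This is a minimal sketch; the index name, shard number, and node name are placeholders you would take from `_cat/shards` and the allocation explanation, and the command is irreversible once accepted:

```python
import json

# Hedged sketch: build a Cluster Reroute command that accepts data loss for
# one unassignable primary shard. All names below are placeholders.
command = {
    "commands": [
        {
            "allocate_empty_primary": {
                "index": "my-index",       # index with the lost primary
                "shard": 0,                # shard number from _cat/shards
                "node": "node-1",          # node that should host the new empty primary
                "accept_data_loss": True,  # required; allocation is irreversible
            }
        }
    ]
}

# POST this body to /_cluster/reroute (for example with curl or a client library).
print(json.dumps(command, indent=2))
```

Keeping `accept_data_loss` explicit in the payload is deliberate: the API refuses these commands without it, which guards against accidentally wiping a recoverable shard.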
diff --git a/troubleshoot/elasticsearch/rejected-requests.md b/troubleshoot/elasticsearch/rejected-requests.md index 86ed13440..60d0586e5 100644 --- a/troubleshoot/elasticsearch/rejected-requests.md +++ b/troubleshoot/elasticsearch/rejected-requests.md @@ -20,7 +20,7 @@ If you’re using Elastic Cloud Hosted, then you can use AutoOps to monitor your ## Check rejected tasks [check-rejected-tasks] -To check the number of rejected tasks for each thread pool, use the [cat thread pool API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-thread-pool.html). A high ratio of `rejected` to `completed` tasks, particularly in the `search` and `write` thread pools, means {{es}} regularly rejects requests. +To check the number of rejected tasks for each thread pool, use the [cat thread pool API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-thread-pool). A high ratio of `rejected` to `completed` tasks, particularly in the `search` and `write` thread pools, means {{es}} regularly rejects requests. ```console GET /_cat/thread_pool?v=true&h=id,name,queue,active,rejected,completed @@ -35,7 +35,7 @@ See [this video](https://www.youtube.com/watch?v=auZJRXoAVpI) for a walkthrough ## Check circuit breakers [check-circuit-breakers] -To check the number of tripped [circuit breakers](https://www.elastic.co/guide/en/elasticsearch/reference/current/circuit-breaker.html), use the [node stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html). +To check the number of tripped [circuit breakers](https://www.elastic.co/guide/en/elasticsearch/reference/current/circuit-breaker.html), use the [node stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-stats). 
```console GET /_nodes/stats/breaker @@ -48,7 +48,7 @@ See [this video](https://www.youtube.com/watch?v=k3wYlRVbMSw) for a walkthrough ## Check indexing pressure [check-indexing-pressure] -To check the number of [indexing pressure](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-indexing-pressure.html) rejections, use the [node stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html). +To check the number of [indexing pressure](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-indexing-pressure.html) rejections, use the [node stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-stats). ```console GET _nodes/stats?human&filter_path=nodes.*.indexing_pressure @@ -58,7 +58,7 @@ These stats are cumulative from node startup. Indexing pressure rejections appear as an `EsRejectedExecutionException`, and indicate that they were rejected due to `combined_coordinating_and_primary`, `coordinating`, `primary`, or `replica`. -These errors are often related to [backlogged tasks](task-queue-backlog.md), [bulk index](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) sizing, or the ingest target’s [`refresh_interval` setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html). +These errors are often related to [backlogged tasks](task-queue-backlog.md), [bulk index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) sizing, or the ingest target’s [`refresh_interval` setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html). See [this video](https://www.youtube.com/watch?v=QuV8QqSfc0c) for a walkthrough of diagnosing indexing pressure rejections. 
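The thread-pool check earlier in this file lends itself to scripting once the `_cat/thread_pool` columns are in hand. A hedged sketch over sample rows (the node name, counts, and the 0.5% threshold are all illustrative, not an Elastic recommendation or live cluster output):

```python
# Hedged sketch: flag thread pools whose rejected:completed ratio looks high.
# Sample rows mimic GET /_cat/thread_pool?h=id,name,queue,active,rejected,completed
sample_rows = [
    "node-1 search 45 8 342 1680342",
    "node-1 write 12 4 981 98210",
    "node-1 get 0 1 0 450211",
]

flagged = []
for row in sample_rows:
    node, name, queue, active, rejected, completed = row.split()
    ratio = int(rejected) / max(int(completed), 1)  # guard against division by zero
    if ratio > 0.005:  # arbitrary illustrative threshold
        flagged.append(f"{node}/{name}")
        print(f"{node}/{name}: {rejected} rejections ({ratio:.2%} of completed)")
```

In this sample only the `write` pool is flagged; a persistently growing ratio there usually correlates with the indexing-pressure rejections described above.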
diff --git a/troubleshoot/elasticsearch/repeated-snapshot-failures.md b/troubleshoot/elasticsearch/repeated-snapshot-failures.md index 7e9df29a0..a7127c14c 100644 --- a/troubleshoot/elasticsearch/repeated-snapshot-failures.md +++ b/troubleshoot/elasticsearch/repeated-snapshot-failures.md @@ -15,7 +15,7 @@ In the event that an automated {{slm}} policy execution is experiencing repeated :::::::{tab-set} ::::::{tab-item} Elasticsearch Service -In order to check the status of failing {{slm}} policies we need to go to Kibana and retrieve the [Snapshot Lifecycle Policy information](https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-get-policy.html). +In order to check the status of failing {{slm}} policies we need to go to Kibana and retrieve the [Snapshot Lifecycle Policy information](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-slm-get-lifecycle). **Use {{kib}}** @@ -33,7 +33,7 @@ In order to check the status of failing {{slm}} policies we need to go to Kibana :class: screenshot ::: -4. [Retrieve](https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-get-policy.html) the {{slm}} policy: +4. 
[Retrieve](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-slm-get-lifecycle) the {{slm}} policy: ```console GET _slm/policy/ @@ -100,7 +100,7 @@ In the event that snapshots are failing for other reasons check the logs on the :::::: ::::::{tab-item} Self-managed -[Retrieve](https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-get-policy.html) the {{slm}} policy: +[Retrieve](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-slm-get-lifecycle) the {{slm}} policy: ```console GET _slm/policy/ diff --git a/troubleshoot/elasticsearch/restore-from-snapshot.md b/troubleshoot/elasticsearch/restore-from-snapshot.md index dba00aee2..6fc9db4d6 100644 --- a/troubleshoot/elasticsearch/restore-from-snapshot.md +++ b/troubleshoot/elasticsearch/restore-from-snapshot.md @@ -33,7 +33,7 @@ In order to restore the indices and data streams that are missing data: :class: screenshot ::: -4. To view the affected indices using the [cat indices API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html). +4. To view the affected indices using the [cat indices API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices). ```console GET _cat/indices?v&health=red&h=index,status,health @@ -49,7 +49,7 @@ In order to restore the indices and data streams that are missing data: The `red` health of the indices above indicates that these indices are missing primary shards, meaning they are missing data. -5. In order to restore the data we need to find a snapshot that contains these two indices. To find such a snapshot use the [get snapshot API](https://www.elastic.co/guide/en/elasticsearch/reference/current/get-snapshot-api.html). +5. In order to restore the data we need to find a snapshot that contains these two indices. To find such a snapshot use the [get snapshot API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-get). 
```console GET _snapshot/my_repository/*?verbose=false @@ -150,13 +150,13 @@ In order to restore the indices and data streams that are missing data: POST my-data-stream/_rollover ``` -8. Now that the data stream preparation is done, we will close the target indices by using the [close indices API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-close.html). +8. Now that the data stream preparation is done, we will close the target indices by using the [close indices API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-close). ```console POST kibana_sample_data_flights,.ds-my-data-stream-2022.06.17-000001/_close ``` - You can confirm that they are closed with the [cat indices API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html). + You can confirm that they are closed with the [cat indices API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices). ```console GET _cat/indices?v&health=red&h=index,status,health @@ -170,7 +170,7 @@ In order to restore the indices and data streams that are missing data: kibana_sample_data_flights close red ``` -9. The indices are closed, now we can restore them from snapshots without causing any complications using the [restore snapshot API](https://www.elastic.co/guide/en/elasticsearch/reference/current/restore-snapshot-api.html): +9. 
The indices are closed, now we can restore them from snapshots without causing any complications using the [restore snapshot API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-restore): ```console POST _snapshot/my_repository/snapshot-20200617/_restore @@ -185,7 +185,7 @@ In order to restore the indices and data streams that are missing data: ::::{note} - If any [feature states](../../deploy-manage/tools/snapshot-and-restore.md#feature-state) need to be restored we’ll need to specify them using the `feature_states` field and the indices that belong to the feature states we restore must not be specified under `indices`. The [Health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html) returns both the `indices` and `feature_states` that need to be restored for the restore from snapshot diagnosis. e.g.: + If any [feature states](../../deploy-manage/tools/snapshot-and-restore.md#feature-state) need to be restored we’ll need to specify them using the `feature_states` field and the indices that belong to the feature states we restore must not be specified under `indices`. The [Health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report) returns both the `indices` and `feature_states` that need to be restored for the restore from snapshot diagnosis. e.g.: :::: @@ -198,7 +198,7 @@ In order to restore the indices and data streams that are missing data: } ``` -10. Finally we can verify that the indices health is now `green` via the [cat indices API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html). +10. Finally we can verify that the indices health is now `green` via the [cat indices API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices). 
```console
GET _cat/indices?v&index=.ds-my-data-stream-2022.06.17-000001,kibana_sample_data_flights&h=index,status,health

@@ -221,7 +221,7 @@ For more guidance on creating and restoring snapshots see [this guide](../../dep
::::::{tab-item} Self-managed
In order to restore the indices that are missing shards:

-1. View the affected indices using the [cat indices API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html).
+1. View the affected indices using the [cat indices API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices).

```console
GET _cat/indices?v&health=red&h=index,status,health

@@ -237,7 +237,7 @@ In order to restore the indices that are missing shards:

The `red` health of the indices above indicates that these indices are missing primary shards, meaning they are missing data.

-2. In order to restore the data we need to find a snapshot that contains these two indices. To find such a snapshot use the [get snapshot API](https://www.elastic.co/guide/en/elasticsearch/reference/current/get-snapshot-api.html).
+2. In order to restore the data we need to find a snapshot that contains these two indices. To find such a snapshot use the [get snapshot API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-get).

```console
GET _snapshot/my_repository/*?verbose=false

@@ -338,13 +338,13 @@ In order to restore the indices that are missing shards:
POST my-data-stream/_rollover
```

-5. Now that the data stream preparation is done, we will close the target indices by using the [close indices API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-close.html).
+5. Now that the data stream preparation is done, we will close the target indices by using the [close indices API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-close).
```console POST kibana_sample_data_flights,.ds-my-data-stream-2022.06.17-000001/_close ``` - You can confirm that they are closed with the [cat indices API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html). + You can confirm that they are closed with the [cat indices API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices). ```console GET _cat/indices?v&health=red&h=index,status,health @@ -358,7 +358,7 @@ In order to restore the indices that are missing shards: kibana_sample_data_flights close red ``` -6. The indices are closed, now we can restore them from snapshots without causing any complications using the [restore snapshot API](https://www.elastic.co/guide/en/elasticsearch/reference/current/restore-snapshot-api.html): +6. The indices are closed, now we can restore them from snapshots without causing any complications using the [restore snapshot API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-restore): ```console POST _snapshot/my_repository/snapshot-20200617/_restore @@ -373,7 +373,7 @@ In order to restore the indices that are missing shards: ::::{note} - If any [feature states](../../deploy-manage/tools/snapshot-and-restore.md#feature-state) need to be restored we’ll need to specify them using the `feature_states` field and the indices that belong to the feature states we restore must not be specified under `indices`. The [Health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html) returns both the `indices` and `feature_states` that need to be restored for the restore from snapshot diagnosis. e.g.: + If any [feature states](../../deploy-manage/tools/snapshot-and-restore.md#feature-state) need to be restored we’ll need to specify them using the `feature_states` field and the indices that belong to the feature states we restore must not be specified under `indices`. 
The [Health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report) returns both the `indices` and `feature_states` that need to be restored for the restore from snapshot diagnosis. e.g.: :::: @@ -386,7 +386,7 @@ In order to restore the indices that are missing shards: } ``` -7. Finally we can verify that the indices health is now `green` via the [cat indices API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html). +7. Finally we can verify that the indices health is now `green` via the [cat indices API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-indices). ```console GET _cat/indices?v&index=.ds-my-data-stream-2022.06.17-000001,kibana_sample_data_flights&h=index,status,health diff --git a/troubleshoot/elasticsearch/security/security-trb-settings.md b/troubleshoot/elasticsearch/security/security-trb-settings.md index 202b51281..f4deb187a 100644 --- a/troubleshoot/elasticsearch/security/security-trb-settings.md +++ b/troubleshoot/elasticsearch/security/security-trb-settings.md @@ -8,7 +8,7 @@ mapped_pages: **Symptoms:** -* When you use the [nodes info API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-info.html) to retrieve settings for a node, some information is missing. +* When you use the [nodes info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-info) to retrieve settings for a node, some information is missing.
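A quick way to observe the symptom is to pull the settings section of the nodes info API and compare it against what you expect from `elasticsearch.yml` (a sketch; the `filter_path` parameter only narrows the response):

```console
GET _nodes/settings?filter_path=nodes.*.name,nodes.*.settings
```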
**Resolution:** diff --git a/troubleshoot/elasticsearch/start-ilm.md b/troubleshoot/elasticsearch/start-ilm.md index 8d7e05da1..2416b90db 100644 --- a/troubleshoot/elasticsearch/start-ilm.md +++ b/troubleshoot/elasticsearch/start-ilm.md @@ -16,7 +16,7 @@ In order to start the automatic {{ilm}} service, follow these steps: :::::::{tab-set} ::::::{tab-item} Elasticsearch Service -In order to start {{ilm}} we need to go to Kibana and execute the [start command](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-start.html). +In order to start {{ilm}} we need to go to Kibana and execute the [start command](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start). **Use {{kib}}** @@ -34,7 +34,7 @@ In order to start {{ilm}} we need to go to Kibana and execute the [start command :class: screenshot ::: -4. [Start](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-start.html) {{ilm}}: +4. [Start](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start) {{ilm}}: ```console POST _ilm/start @@ -64,7 +64,7 @@ In order to start {{ilm}} we need to go to Kibana and execute the [start command :::::: ::::::{tab-item} Self-managed -[Start](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-start.html) {{ilm}}: +[Start](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start) {{ilm}}: ```console POST _ilm/start diff --git a/troubleshoot/elasticsearch/start-slm.md b/troubleshoot/elasticsearch/start-slm.md index f2810c25d..c4a527bc0 100644 --- a/troubleshoot/elasticsearch/start-slm.md +++ b/troubleshoot/elasticsearch/start-slm.md @@ -16,7 +16,7 @@ In order to start the snapshot lifecycle management service, follow these steps: :::::::{tab-set} ::::::{tab-item} Elasticsearch Service -In order to start {{slm}} we need to go to Kibana and execute the [start command](https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-start.html). 
+In order to start {{slm}} we need to go to Kibana and execute the [start command](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-slm-start). **Use {{kib}}** @@ -34,7 +34,7 @@ In order to start {{slm}} we need to go to Kibana and execute the [start command :class: screenshot ::: -4. [Start](https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-start.html) {{slm}}: +4. [Start](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-slm-start) {{slm}}: ```console POST _slm/start @@ -64,7 +64,7 @@ In order to start {{slm}} we need to go to Kibana and execute the [start command :::::: ::::::{tab-item} Self-managed -[Start](https://www.elastic.co/guide/en/elasticsearch/reference/current/slm-api-start.html) {{slm}}: +[Start](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-slm-start) {{slm}}: ```console POST _slm/start diff --git a/troubleshoot/elasticsearch/task-queue-backlog.md b/troubleshoot/elasticsearch/task-queue-backlog.md index 86c907f93..c602e0508 100644 --- a/troubleshoot/elasticsearch/task-queue-backlog.md +++ b/troubleshoot/elasticsearch/task-queue-backlog.md @@ -28,7 +28,7 @@ To identify the cause of the backlog, try these diagnostic actions. A [depleted thread pool](high-cpu-usage.md) can result in [rejected requests](rejected-requests.md). 
-Use the [cat thread pool API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-thread-pool.html) to monitor active threads, queued tasks, rejections, and completed tasks: +Use the [cat thread pool API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-thread-pool) to monitor active threads, queued tasks, rejections, and completed tasks: ```console GET /_cat/thread_pool?v&s=t,n&h=type,name,node_name,active,queue,rejected,completed @@ -41,26 +41,26 @@ GET /_cat/thread_pool?v&s=t,n&h=type,name,node_name,active,queue,rejected,comple ### Inspect hot threads on each node [diagnose-task-queue-hot-thread] -If a particular thread pool queue is backed up, periodically poll the [nodes hot threads API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) to gauge the thread’s progression and ensure it has sufficient resources: +If a particular thread pool queue is backed up, periodically poll the [nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) to gauge the thread’s progression and ensure it has sufficient resources: ```console GET /_nodes/hot_threads ``` -Although the hot threads API response does not list the specific tasks running on a thread, it provides a summary of the thread’s activities. You can correlate a hot threads response with a [task management API response](https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html) to identify any overlap with specific tasks. For example, if the hot threads response indicates the thread is `performing a search query`, you can [check for long-running search tasks](#diagnose-task-queue-long-running-node-tasks) using the task management API. +Although the hot threads API response does not list the specific tasks running on a thread, it provides a summary of the thread’s activities. 
You can correlate a hot threads response with a [task management API response](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks) to identify any overlap with specific tasks. For example, if the hot threads response indicates the thread is `performing a search query`, you can [check for long-running search tasks](#diagnose-task-queue-long-running-node-tasks) using the task management API. ### Identify long-running node tasks [diagnose-task-queue-long-running-node-tasks] -Long-running tasks can also cause a backlog. Use the [task management API](https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html) to check for excessive `running_time_in_nanos` values: +Long-running tasks can also cause a backlog. Use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks) to check for excessive `running_time_in_nanos` values: ```console GET /_tasks?pretty=true&human=true&detailed=true ``` -You can filter on a specific `action`, such as [bulk indexing](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) or search-related tasks. These tend to be long-running. +You can filter on a specific `action`, such as [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) or search-related tasks. These tend to be long-running. 
-* Filter on [bulk index](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) actions: +* Filter on [bulk index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) actions: ```console GET /_tasks?human&detailed&actions=indices:data/write/bulk @@ -78,7 +78,7 @@ Long-running tasks might need to be [canceled](#resolve-task-queue-backlog-stuck ### Look for long-running cluster tasks [diagnose-task-queue-long-running-cluster-tasks] -Use the [cluster pending tasks API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-pending.html) to identify delays in cluster state synchronization: +Use the [cluster pending tasks API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-pending-tasks) to identify delays in cluster state synchronization: ```console GET /_cluster/pending_tasks @@ -101,7 +101,7 @@ In some cases, you might need to increase the thread pool size. For example, the ### Cancel stuck tasks [resolve-task-queue-backlog-stuck-tasks] -If an active task’s [hot thread](#diagnose-task-queue-hot-thread) shows no progress, consider [canceling the task](https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html#task-cancellation). +If an active task’s [hot thread](#diagnose-task-queue-hot-thread) shows no progress, consider [canceling the task](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks#task-cancellation). ### Address hot spotting [resolve-task-queue-backlog-hotspotting] diff --git a/troubleshoot/elasticsearch/transform-troubleshooting.md b/troubleshoot/elasticsearch/transform-troubleshooting.md index 1109291e7..254bfdc85 100644 --- a/troubleshoot/elasticsearch/transform-troubleshooting.md +++ b/troubleshoot/elasticsearch/transform-troubleshooting.md @@ -13,7 +13,7 @@ For issues that you cannot fix yourself … we’re here to help. 
If you encounter problems with your {{transforms}}, you can gather more information from the following files and APIs: * Lightweight audit messages are stored in `.transform-notifications-read`. Search by your `transform_id`. -* The [get {{transform}} statistics API](https://www.elastic.co/guide/en/elasticsearch/reference/current/get-transform-stats.html) provides information about the {{transform}} status and failures. -* If the {{transform}} exists as a task, you can use the [task management API](https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html) to gather task information. For example: `GET _tasks?actions=data_frame/transforms*&detailed`. Typically, the task exists when the {{transform}} is in a started or failed state. +* The [get {{transform}} statistics API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-transform-get-transform-stats) provides information about the {{transform}} status and failures. +* If the {{transform}} exists as a task, you can use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks) to gather task information. For example: `GET _tasks?actions=data_frame/transforms*&detailed`. Typically, the task exists when the {{transform}} is in a started or failed state. * The {{es}} logs from the node that was running the {{transform}} might also contain useful information. You can identify the node from the notification messages. Alternatively, if the task still exists, you can get that information from the get {{transform}} statistics API. For more information, see [*Elasticsearch application logging*](../../deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md).
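For example, assuming a {{transform}} named `my-transform` (a hypothetical ID), the statistics and task information described above can be gathered with:

```console
GET _transform/my-transform/_stats

GET _tasks?actions=data_frame/transforms*&detailed
```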
diff --git a/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md b/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md index 3db96a97f..33c326467 100644 --- a/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md +++ b/troubleshoot/elasticsearch/troubleshoot-migrate-to-tiers.md @@ -35,7 +35,7 @@ In order to get the shards assigned we need to call the [migrate to data tiers r :class: screenshot ::: -4. First, let’s [stop](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-stop.html) {{ilm}} +4. First, let’s [stop](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop) {{ilm}} ```console POST /_ilm/stop @@ -88,7 +88,7 @@ In order to get the shards assigned we need to call the [migrate to data tiers r 4. The composable index templates that were updated to not contain custom routing settings for the provided data attribute. 5. The component templates that were updated to not contain custom routing settings for the provided data attribute. -7. [Restart](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-start.html) {{ilm}} +7. [Restart](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start) {{ilm}} ```console POST /_ilm/start @@ -112,7 +112,7 @@ In order to get the shards assigned we need to make sure the deployment is using node.roles [ data_hot, data_content ] ``` -2. [Stop](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-stop.html) {{ilm}} +2. [Stop](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop) {{ilm}} ```console POST /_ilm/stop @@ -165,7 +165,7 @@ In order to get the shards assigned we need to make sure the deployment is using 4. The composable index templates that were updated to not contain custom routing settings for the provided data attribute. 5. The component templates that were updated to not contain custom routing settings for the provided data attribute. -5.
[Restart](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-start.html) {{ilm}} +5. [Restart](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start) {{ilm}} ```console POST /_ilm/start diff --git a/troubleshoot/elasticsearch/troubleshooting-searches.md b/troubleshoot/elasticsearch/troubleshooting-searches.md index 04f84487d..f6515436c 100644 --- a/troubleshoot/elasticsearch/troubleshooting-searches.md +++ b/troubleshoot/elasticsearch/troubleshooting-searches.md @@ -13,19 +13,19 @@ When you query your data, Elasticsearch may return an error, no search results, Elasticsearch returns an `index_not_found_exception` when the data stream, index or alias you try to query does not exist. This can happen when you misspell the name or when the data has been indexed to a different data stream or index. -Use the [exists API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-exists.html) to check whether a data stream, index, or alias exists: +Use the [exists API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-exists) to check whether a data stream, index, or alias exists: ```console HEAD my-data-stream ``` -Use the [data stream stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-stats-api.html) to list all data streams: +Use the [data stream stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-data-streams-stats-1) to list all data streams: ```console GET /_data_stream/_stats?human=true ``` -Use the [get index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-index.html) to list all indices and their aliases: +Use the [get index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get) to list all indices and their aliases: ```console GET _all?filter_path=*.aliases @@ -42,7 +42,7 @@ GET /my-alias/_search?ignore_unavailable=true When a search request
returns no hits, the data stream or index may contain no data. This can happen when there is a data ingestion issue. For example, the data may have been indexed to a data stream or index with another name. -Use the [count API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-count.html) to retrieve the number of documents in a data stream or index. Check that `count` in the response is not 0. +Use the [count API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-count) to retrieve the number of documents in a data stream or index. Check that `count` in the response is not 0. ```console GET /my-index-000001/_count @@ -56,7 +56,7 @@ When getting no search results in {{kib}}, check that you have selected the corr ## Check that the field exists and its capabilities [troubleshooting-searches-field-exists-caps] -Querying a field that does not exist will not return any results. Use the [field capabilities API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-field-caps.html) to check whether a field exists: +Querying a field that does not exist will not return any results. Use the [field capabilities API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-field-caps) to check whether a field exists: ```console GET /my-index-000001/_field_caps?fields=my-field @@ -98,7 +98,7 @@ A field’s capabilities are determined by its [mapping](../../manage-data/data- GET /my-index-000001/_mappings ``` -If you query a `text` field, pay attention to the analyzer that may have been configured. You can use the [analyze API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-analyze.html) to check how a field’s analyzer processes values and query terms: +If you query a `text` field, pay attention to the analyzer that may have been configured. 
You can use the [analyze API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-analyze) to check how a field’s analyzer processes values and query terms: ```console GET /my-index-000001/_analyze @@ -174,7 +174,7 @@ GET my-index-000001/_search?sort=@timestamp:desc&size=1 When a query returns unexpected results, Elasticsearch offers several tools to investigate why. -The [validate API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-validate.html) enables you to validate a query. Use the `rewrite` parameter to return the Lucene query an Elasticsearch query is rewritten into: +The [validate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-validate-query) enables you to validate a query. Use the `rewrite` parameter to return the Lucene query an Elasticsearch query is rewritten into: ```console GET /my-index-000001/_validate/query?rewrite=true @@ -190,7 +190,7 @@ GET /my-index-000001/_validate/query?rewrite=true } ``` -Use the [explain API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-explain.html) to find out why a specific document matches or doesn’t match a query: +Use the [explain API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-explain) to find out why a specific document matches or doesn’t match a query: ```console GET /my-index-000001/_explain/0 @@ -211,13 +211,13 @@ To troubleshoot queries in {{kib}}, select **Inspect** in the toolbar. Next, sel ## Check index settings [troubleshooting-searches-settings] -[Index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-modules-settings) can influence search results. For example, the `index.query.default_field` setting, which determines the field that is queried when a query specifies no explicit field. 
Use the [get index settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) to retrieve the settings for an index: +[Index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-modules-settings) can influence search results. For example, the `index.query.default_field` setting, which determines the field that is queried when a query specifies no explicit field. Use the [get index settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) to retrieve the settings for an index: ```console GET /my-index-000001/_settings ``` -You can update dynamic index settings with the [update index settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html). [Changing dynamic index settings for a data stream](../../manage-data/data-store/index-types/modify-data-stream.md#change-dynamic-index-setting-for-a-data-stream) requires changing the index template used by the data stream. +You can update dynamic index settings with the [update index settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings). [Changing dynamic index settings for a data stream](../../manage-data/data-store/index-types/modify-data-stream.md#change-dynamic-index-setting-for-a-data-stream) requires changing the index template used by the data stream. For static settings, you need to create a new index with the correct settings. Next, you can reindex the data into that index. For data streams, refer to [Change a static index setting for a data stream](../../manage-data/data-store/index-types/modify-data-stream.md#change-static-index-setting-for-a-data-stream). 
diff --git a/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md b/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md index 8fd0941c2..ef3a77704 100644 --- a/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md +++ b/troubleshoot/elasticsearch/troubleshooting-shards-capacity-issues.md @@ -6,7 +6,7 @@ mapped_pages: # Troubleshoot shard capacity health issues [troubleshooting-shards-capacity-issues] -{{es}} limits the maximum number of shards to be held per node using the [`cluster.max_shards_per_node`](https://www.elastic.co/guide/en/elasticsearch/reference/current/misc-cluster-settings.html#cluster-max-shards-per-node) and [`cluster.max_shards_per_node.frozen`](https://www.elastic.co/guide/en/elasticsearch/reference/current/misc-cluster-settings.html#cluster-max-shards-per-node-frozen) settings. The current shards capacity of the cluster is available in the [health API shards capacity section](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html#health-api-response-details-shards-capacity). +{{es}} limits the maximum number of shards to be held per node using the [`cluster.max_shards_per_node`](https://www.elastic.co/guide/en/elasticsearch/reference/current/misc-cluster-settings.html#cluster-max-shards-per-node) and [`cluster.max_shards_per_node.frozen`](https://www.elastic.co/guide/en/elasticsearch/reference/current/misc-cluster-settings.html#cluster-max-shards-per-node-frozen) settings. The current shards capacity of the cluster is available in the [health API shards capacity section](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report). ## Cluster is close to reaching the configured maximum number of shards for data nodes. 
[_cluster_is_close_to_reaching_the_configured_maximum_number_of_shards_for_data_nodes] @@ -15,7 +15,7 @@ The [`cluster.max_shards_per_node`](https://www.elastic.co/guide/en/elasticsearc This symptom indicates that action should be taken, otherwise, either the creation of new indices or upgrading the cluster could be blocked. -If you’re confident your changes won’t destabilize the cluster, you can temporarily increase the limit using the [cluster update settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html): +If you’re confident your changes won’t destabilize the cluster, you can temporarily increase the limit using the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings): :::::::{tab-set} @@ -87,7 +87,7 @@ If you’re confident your changes won’t destabilize the cluster, you can temp This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or [reduce your cluster’s shard count](../../deploy-manage/production-guidance/optimize-performance/size-shards.md#reduce-cluster-shard-count) on nodes that do not belong to the frozen tier. -6. To verify that the change has fixed the issue, you can get the current status of the `shards_capacity` indicator by checking the `data` section of the [health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html#health-api-example): +6. To verify that the change has fixed the issue, you can get the current status of the `shards_capacity` indicator by checking the `data` section of the [health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report): ```console GET _health_report/shards_capacity @@ -166,7 +166,7 @@ The response will look like this: 2. 
Current number of open shards across the cluster -Using the [`cluster settings API`](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html), update the [`cluster.max_shards_per_node`](https://www.elastic.co/guide/en/elasticsearch/reference/current/misc-cluster-settings.html#cluster-max-shards-per-node) setting: +Using the [`cluster settings API`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings), update the [`cluster.max_shards_per_node`](https://www.elastic.co/guide/en/elasticsearch/reference/current/misc-cluster-settings.html#cluster-max-shards-per-node) setting: ```console PUT _cluster/settings @@ -177,7 +177,7 @@ PUT _cluster/settings } ``` -This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or [reduce your cluster’s shard count](../../deploy-manage/production-guidance/optimize-performance/size-shards.md#reduce-cluster-shard-count) on nodes that do not belong to the frozen tier. To verify that the change has fixed the issue, you can get the current status of the `shards_capacity` indicator by checking the `data` section of the [health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html#health-api-example): +This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or [reduce your cluster’s shard count](../../deploy-manage/production-guidance/optimize-performance/size-shards.md#reduce-cluster-shard-count) on nodes that do not belong to the frozen tier. 
To verify that the change has fixed the issue, you can get the current status of the `shards_capacity` indicator by checking the `data` section of the [health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report): ```console GET _health_report/shards_capacity @@ -225,7 +225,7 @@ The [`cluster.max_shards_per_node.frozen`](https://www.elastic.co/guide/en/elast This symptom indicates that action should be taken, otherwise, either the creation of new indices or upgrading the cluster could be blocked. -If you’re confident your changes won’t destabilize the cluster, you can temporarily increase the limit using the [cluster update settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html): +If you’re confident your changes won’t destabilize the cluster, you can temporarily increase the limit using the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings): :::::::{tab-set} @@ -296,7 +296,7 @@ If you’re confident your changes won’t destabilize the cluster, you can temp This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or [reduce your cluster’s shard count](../../deploy-manage/production-guidance/optimize-performance/size-shards.md#reduce-cluster-shard-count) on nodes that belong to the frozen tier. -6. To verify that the change has fixed the issue, you can get the current status of the `shards_capacity` indicator by checking the `data` section of the [health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html#health-api-example): +6. 
To verify that the change has fixed the issue, you can get the current status of the `shards_capacity` indicator by checking the `data` section of the [health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report): ```console GET _health_report/shards_capacity @@ -373,7 +373,7 @@ GET _health_report/shards_capacity 2. Current number of open shards used by frozen nodes across the cluster. -Using the [`cluster settings API`](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html), update the [`cluster.max_shards_per_node.frozen`](https://www.elastic.co/guide/en/elasticsearch/reference/current/misc-cluster-settings.html#cluster-max-shards-per-node-frozen) setting: +Using the [`cluster settings API`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings), update the [`cluster.max_shards_per_node.frozen`](https://www.elastic.co/guide/en/elasticsearch/reference/current/misc-cluster-settings.html#cluster-max-shards-per-node-frozen) setting: ```console PUT _cluster/settings @@ -384,7 +384,7 @@ PUT _cluster/settings } ``` -This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or [reduce your cluster’s shard count](../../deploy-manage/production-guidance/optimize-performance/size-shards.md#reduce-cluster-shard-count) on nodes that belong to the frozen tier. To verify that the change has fixed the issue, you can get the current status of the `shards_capacity` indicator by checking the `data` section of the [health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html#health-api-example): +This increase should only be temporary. 
As a long-term solution, we recommend you add nodes to the oversharded data tier or [reduce your cluster’s shard count](../../deploy-manage/production-guidance/optimize-performance/size-shards.md#reduce-cluster-shard-count) on nodes that belong to the frozen tier. To verify that the change has fixed the issue, you can get the current status of the `shards_capacity` indicator by checking the `data` section of the [health API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report): ```console GET _health_report/shards_capacity diff --git a/troubleshoot/elasticsearch/troubleshooting-unbalanced-cluster.md b/troubleshoot/elasticsearch/troubleshooting-unbalanced-cluster.md index d43a4f385..0f6f3b7cd 100644 --- a/troubleshoot/elasticsearch/troubleshooting-unbalanced-cluster.md +++ b/troubleshoot/elasticsearch/troubleshooting-unbalanced-cluster.md @@ -24,7 +24,7 @@ Elasticsearch does not take into account the amount or complexity of search quer There is no guarantee that individual components will be evenly spread across the nodes. This could happen if some nodes have fewer shards, or are using less disk space, but are assigned shards with higher write loads. 
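Because shard counts and disk usage alone do not capture write load, it can help to compare per-node indexing activity as well (a sketch; the `filter_path` parameter only trims the node stats response):

```console
GET _nodes/stats/indices/indexing?filter_path=nodes.*.name,nodes.*.indices.indexing.index_total
```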
-Use the [cat allocation command](https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-allocation.html) to list workloads per node: +Use the [cat allocation command](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-allocation) to list workloads per node: ```console GET /_cat/allocation?v diff --git a/troubleshoot/elasticsearch/troubleshooting-unstable-cluster.md b/troubleshoot/elasticsearch/troubleshooting-unstable-cluster.md index 4cc009261..4277bcf68 100644 --- a/troubleshoot/elasticsearch/troubleshooting-unstable-cluster.md +++ b/troubleshoot/elasticsearch/troubleshooting-unstable-cluster.md @@ -60,7 +60,7 @@ Nodes will also log a message containing `master node changed` whenever they sta If a node restarts, it will leave the cluster and then join the cluster again. When it rejoins, the `NodeJoinExecutor` will log that it processed a `node-join` task indicating that the node is `joining after restart`. If a node is unexpectedly restarting, look at the node’s logs to see why it is shutting down. -The [Health](https://www.elastic.co/guide/en/elasticsearch/reference/current/health-api.html) API on the affected node will also provide some useful information about the situation. +The [Health](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-health-report) API on the affected node will also provide some useful information about the situation. If the node did not restart then you should look at the reason for its departure more closely. Each reason has different troubleshooting steps, described below. 
There are three possible reasons:
@@ -99,7 +99,7 @@ If you’re an advanced user, you can get more detailed information about what t
```
logger.org.elasticsearch.cluster.coordination.LagDetector: DEBUG
```

-When this logger is enabled, {{es}} will attempt to run the [Nodes hot threads](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) API on the faulty node and report the results in the logs on the elected master. The results are compressed, encoded, and split into chunks to avoid truncation:
+When this logger is enabled, {{es}} will attempt to run the [Nodes hot threads](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) API on the faulty node and report the results in the logs on the elected master. The results are compressed, encoded, and split into chunks to avoid truncation:

```text
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] [part 1]: H4sIAAAAAAAA/x...
```
@@ -131,7 +131,7 @@ If the last check failed with an exception then the exception is reported, and t
* Packet captures will reveal system-level and network-level faults, especially if you capture the network traffic simultaneously at the elected master and the faulty node and analyse it alongside the {{es}} logs from those nodes. The connection used for follower checks is not used for any other traffic so it can be easily identified from the flow pattern alone, even if TLS is in use: almost exactly every second there will be a few hundred bytes sent each way, first the request by the master and then the response by the follower. You should be able to observe any retransmissions, packet loss, or other delays on such a connection.
* Long waits for particular threads to be available can be identified by taking stack dumps of the main {{es}} process (for example, using `jstack`) or a profiling trace (for example, using Java Flight Recorder) in the few seconds leading up to the relevant log message.

- The [Nodes hot threads](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) API sometimes yields useful information, but bear in mind that this API also requires a number of `transport_worker` and `generic` threads across all the nodes in the cluster. The API may be affected by the very problem you’re trying to diagnose. `jstack` is much more reliable since it doesn’t require any JVM threads.
+ The [Nodes hot threads](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) API sometimes yields useful information, but bear in mind that this API also requires a number of `transport_worker` and `generic` threads across all the nodes in the cluster. The API may be affected by the very problem you’re trying to diagnose. `jstack` is much more reliable since it doesn’t require any JVM threads.

  The threads involved in discovery and cluster membership are mainly `transport_worker` and `cluster_coordination` threads, for which there should never be a long wait. There may also be evidence of long waits for threads in the {{es}} logs, particularly looking at warning logs from `org.elasticsearch.transport.InboundHandler`. See [Networking threading model](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#modules-network-threading-model) for more information.
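As an aside on the chunked hot-threads log lines that appear in these hunks: the `[part N]` payloads can be reassembled for reading by concatenating the base64 parts in order, base64-decoding, and decompressing with gzip. A minimal sketch — the payload below is a stand-in generated on the spot, not real hot-threads output:

```shell
# Reassembling hot-threads output that was split into "[part N]" log chunks:
# concatenate the base64 payloads in order, base64-decode, then gunzip.
# The payload here is a stand-in, not real Elasticsearch output.
payload=$(printf 'hot threads sample output' | gzip -c | base64 | tr -d '\n')

# Pretend the payload arrived as two log chunks ("[part 1]", "[part 2]"):
part1=$(printf '%s' "$payload" | cut -c1-20)
part2=$(printf '%s' "$payload" | cut -c21-)

# Reassemble and decode:
printf '%s%s' "$part1" "$part2" | base64 -d | gzip -d
# prints: hot threads sample output
```

In practice you would extract the base64 strings from each `[part N]` log line on the elected master and feed them through the same `base64 -d | gzip -d` pipeline.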
@@ -149,7 +149,7 @@ To gather more information about the reason for shards shutting down slowly, con
```
logger.org.elasticsearch.env.NodeEnvironment: DEBUG
```

-When this logger is enabled, {{es}} will attempt to run the [Nodes hot threads](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-hot-threads.html) API whenever it encounters a `ShardLockObtainFailedException`. The results are compressed, encoded, and split into chunks to avoid truncation:
+When this logger is enabled, {{es}} will attempt to run the [Nodes hot threads](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) API whenever it encounters a `ShardLockObtainFailedException`. The results are compressed, encoded, and split into chunks to avoid truncation:

```text
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] [part 1]: H4sIAAAAAAAA/x...
```
diff --git a/troubleshoot/kibana/access.md b/troubleshoot/kibana/access.md
index daf4585e9..3fa059e9c 100644
--- a/troubleshoot/kibana/access.md
+++ b/troubleshoot/kibana/access.md
@@ -64,7 +64,7 @@ Troubleshoot the `Kibana Server is not Ready yet` error.
```
curl -XGET elasticsearch_ip_or_hostname:9200/_cat/indices/.kibana,.kibana_task_manager,.kibana_security_session?v=true
```

- These {{kib}}-backing indices must also not have [index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) flagging `read_only_allow_delete` or `write` [index blocks](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-blocks.html).
+ These {{kib}}-backing indices must also not have [index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) flagging `read_only_allow_delete` or `write` [index blocks](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-blocks.html).

3. [Shut down all {{kib}} nodes](../../deploy-manage/maintenance/start-stop-services/start-stop-kibana.md).
4. Choose any {{kib}} node, then update the config to set the [debug logging](../../deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md#change-overall-log-level).
diff --git a/troubleshoot/kibana/error-server-not-ready.md b/troubleshoot/kibana/error-server-not-ready.md
index 2855617e5..43ecfb304 100644
--- a/troubleshoot/kibana/error-server-not-ready.md
+++ b/troubleshoot/kibana/error-server-not-ready.md
@@ -64,7 +64,7 @@ Troubleshoot the `Kibana Server is not Ready yet` error.
```
curl -XGET elasticsearch_ip_or_hostname:9200/_cat/indices/.kibana,.kibana_task_manager,.kibana_security_session?v=true
```

- These {{kib}}-backing indices must also not have [index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-settings.html) flagging `read_only_allow_delete` or `write` [index blocks](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-blocks.html).
+ These {{kib}}-backing indices must also not have [index settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) flagging `read_only_allow_delete` or `write` [index blocks](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-blocks.html).

3. [Shut down all {{kib}} nodes](../../deploy-manage/maintenance/start-stop-services/start-stop-kibana.md).
4. Choose any {{kib}} node, then update the config to set the [debug logging](../../deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md#change-overall-log-level).
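A quick way to check those backing indices for such blocks — a sketch, assuming the default `.kibana*` index names — is to filter the get-settings response down to just the block settings:

```console
GET .kibana*/_settings?filter_path=*.settings.index.blocks*
```

Any index that reports `blocks.read_only_allow_delete` or `blocks.write` as `"true"` has the corresponding block set and will reject the writes {{kib}} needs to start up.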
diff --git a/troubleshoot/kibana/maps.md b/troubleshoot/kibana/maps.md
index d158f2ff8..f271648b8 100644
--- a/troubleshoot/kibana/maps.md
+++ b/troubleshoot/kibana/maps.md
@@ -14,7 +14,7 @@ Use the information in this section to inspect Elasticsearch requests and find s
## Inspect Elasticsearch requests [_inspect_elasticsearch_requests]

-Maps uses the [{{es}} vector tile search API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-vector-tile-api.html) and the [{{es}} search API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html) to get documents and aggregation results from {{es}}. Use **Vector tiles** inspector to view {{es}} vector tile search API requests. Use **Requests** inspector to view {{es}} search API requests.
+Maps uses the [{{es}} vector tile search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search-mvt) and the [{{es}} search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) to get documents and aggregation results from {{es}}. Use **Vector tiles** inspector to view {{es}} vector tile search API requests. Use **Requests** inspector to view {{es}} search API requests.

:::{image} ../../images/kibana-vector_tile_inspector.png
:alt: vector tile inspector
diff --git a/troubleshoot/observability/apm/known-issues.md b/troubleshoot/observability/apm/known-issues.md
index c9ed73fe9..324c989f3 100644
--- a/troubleshoot/observability/apm/known-issues.md
+++ b/troubleshoot/observability/apm/known-issues.md
@@ -111,7 +111,7 @@ There are three ways to fix this error:
1. Find broken rules

   :::::{admonition}
- To identify rules in this exact state, you can use the [find rules endpoint](https://www.elastic.co/docs/api/doc/kibana/v8/group/endpoint-alerting) and search for the APM anomaly rule type as well as this exact error message indicating that the rule is in the broken state. We will also use the `fields` parameter to specify only the fields required when making the update request later.
+ To identify rules in this exact state, you can use the [find rules endpoint](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-alerting) and search for the APM anomaly rule type as well as this exact error message indicating that the rule is in the broken state. We will also use the `fields` parameter to specify only the fields required when making the update request later.

   * `search_fields=alertTypeId`
   * `search=apm.anomaly`
@@ -188,7 +188,7 @@ There are three ways to fix this error:
3. Update each rule using the `PUT /api/alerting/rule/{{id}}` API

   ::::{admonition}
- For each rule, submit a PUT request to the [update rule endpoint](https://www.elastic.co/docs/api/doc/kibana/v8/group/endpoint-alerting) using that rule’s ID and its stored update document from the previous step. For example, assuming the first broken rule’s ID is `046c0d4f`:
+ For each rule, submit a PUT request to the [update rule endpoint](https://www.elastic.co/docs/api/doc/kibana/group/endpoint-alerting) using that rule’s ID and its stored update document from the previous step. For example, assuming the first broken rule’s ID is `046c0d4f`:

```shell
curl -u "$KIBANA_USER":"$KIBANA_PASSWORD" -XPUT "$KIBANA_URL/api/alerting/rule/046c0d4f" -H 'Content-Type: application/json' -H 'kbn-xsrf: rule-update' -d @046c0d4f.json
```
diff --git a/troubleshoot/observability/troubleshoot-mapping-issues.md b/troubleshoot/observability/troubleshoot-mapping-issues.md
index 7e8b95c78..a8f038d05 100644
--- a/troubleshoot/observability/troubleshoot-mapping-issues.md
+++ b/troubleshoot/observability/troubleshoot-mapping-issues.md
@@ -23,7 +23,7 @@ It is necessary to stop all {{heartbeat}}/{{elastic-agent}} instances that are t
To ensure the mapping is applied to all {{heartbeat}} data going forward, delete all the {{heartbeat}} indices that match the pattern the {{uptime-app}} will use.
-There are multiple ways to achieve this. You can read about performing this using the [Index Management UI](../../manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md) or with the [Delete index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html).
+There are multiple ways to achieve this. You can read about performing this using the [Index Management UI](../../manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md) or with the [Delete index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete).

If using {{elastic-agent}} you will want to fix any issues with custom data stream mappings. We encourage the use of {{fleet}} to eliminate this issue.
diff --git a/troubleshoot/security/detection-rules.md b/troubleshoot/security/detection-rules.md
index 826413dce..a8442b50a 100644
--- a/troubleshoot/security/detection-rules.md
+++ b/troubleshoot/security/detection-rules.md
@@ -98,7 +98,7 @@ A field can have type conflicts *and* be unmapped in specified indices.
### Fields with conflicting types [fields-with-conflicting-types]

-Type conflicts occur when a field is mapped to different types across multiple indices. To resolve this issue, you can create new indices with matching field type mappings and [reindex your data](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html). Otherwise, use the information about a field’s type mappings to ensure you’re entering compatible field values when defining exception conditions.
+Type conflicts occur when a field is mapped to different types across multiple indices. To resolve this issue, you can create new indices with matching field type mappings and [reindex your data](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex). Otherwise, use the information about a field’s type mappings to ensure you’re entering compatible field values when defining exception conditions.

In the following example, the selected field has been defined as different types across five indices.
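One way to see exactly which indices map a field to which types — a sketch using a hypothetical index pattern and field name — is the field capabilities API:

```console
GET my-index-*/_field_caps?fields=host.id
```

In the response, a conflicting field appears under more than one type key, and each type entry carries an `indices` array listing which concrete indices use that mapping, which tells you exactly which indices to reindex.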