From e40d50a9ac8d9ff8aaf12a7519844cb4084dcd15 Mon Sep 17 00:00:00 2001 From: Colleen McGinnis Date: Wed, 12 Feb 2025 16:15:58 -0600 Subject: [PATCH] replace api links --- manage-data/data-store/aliases.md | 12 +++---- .../downsampling-time-series-data-stream.md | 4 +-- .../index-types/modify-data-stream.md | 28 +++++++-------- .../index-types/run-downsampling-manually.md | 6 ++-- ...ownsampling-using-data-stream-lifecycle.md | 6 ++-- .../index-types/run-downsampling-with-ilm.md | 2 +- .../index-types/set-up-data-stream.md | 14 ++++---- .../data-store/index-types/set-up-tsds.md | 6 ++-- .../time-series-data-stream-tsds.md | 4 +-- manage-data/data-store/index-types/tsdb.md | 4 +-- .../data-store/index-types/use-data-stream.md | 36 +++++++++---------- manage-data/data-store/mapping.md | 4 +-- .../mapping/dynamic-field-mapping.md | 2 +- .../data-store/mapping/dynamic-templates.md | 4 +-- .../data-store/mapping/explicit-mapping.md | 10 +++--- .../explore-data-with-runtime-fields.md | 2 +- .../data-store/mapping/runtime-fields.md | 2 +- manage-data/data-store/templates.md | 10 +++--- .../text-analysis/specify-an-analyzer.md | 14 ++++---- .../text-analysis/test-an-analyzer.md | 2 +- manage-data/ingest.md | 2 +- .../ingesting-data-for-elastic-solutions.md | 4 +-- ...ta-with-nodejs-on-elasticsearch-service.md | 2 +- ...ta-with-python-on-elasticsearch-service.md | 2 +- ...ample-enrich-data-based-on-exact-values.md | 8 ++--- ...xample-enrich-data-based-on-geolocation.md | 12 +++---- ...-enrich-data-by-matching-value-to-range.md | 8 ++--- .../transform-enrich/ingest-pipelines.md | 20 +++++------ .../set-up-an-enrich-processor.md | 28 +++++++-------- manage-data/lifecycle/data-stream.md | 6 ++-- ...orial-create-data-stream-with-lifecycle.md | 8 ++--- .../tutorial-data-stream-retention.md | 14 ++++---- ...ed-data-stream-to-data-stream-lifecycle.md | 8 ++--- .../tutorial-update-existing-data-stream.md | 8 ++--- .../configure-lifecycle-policy.md | 16 ++++----- 
.../index-lifecycle.md | 2 +- .../index-management-in-kibana.md | 10 +++--- .../manage-existing-indices.md | 14 ++++---- .../restore-managed-data-stream-index.md | 8 ++--- .../index-lifecycle-management/rollover.md | 2 +- .../skip-rollover.md | 2 +- .../start-stop-index-lifecycle-management.md | 6 ++-- .../tutorial-automate-rollover.md | 4 +-- .../tutorial-customize-built-in-policies.md | 6 ++-- .../rollup/getting-started-with-rollups.md | 4 +-- .../rollup/rollup-search-limitations.md | 4 +-- ...signed-certificate-using-remote-reindex.md | 4 +-- ...lasticsearch-to-manage-time-series-data.md | 14 ++++---- 48 files changed, 199 insertions(+), 199 deletions(-) diff --git a/manage-data/data-store/aliases.md b/manage-data/data-store/aliases.md index 11ee97a9f..4d3773a55 100644 --- a/manage-data/data-store/aliases.md +++ b/manage-data/data-store/aliases.md @@ -11,7 +11,7 @@ Aliases enable you to: * Query multiple indices/data streams together with a single name * Change which indices/data streams your application uses in real time -* [Reindex](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) data without downtime +* [Reindex](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) data without downtime ## Alias types [alias-types] @@ -26,7 +26,7 @@ An alias cannot point to both data streams and indices. You also cannot add a da ## Add an alias [add-alias] -To add an existing data stream or index to an alias, use the [aliases API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html)'s `add` action. If the alias doesn’t exist, the request creates it. +To add an existing data stream or index to an alias, use the [aliases API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-update-aliases)'s `add` action. If the alias doesn’t exist, the request creates it. 
```console POST _aliases @@ -169,7 +169,7 @@ Allowing the action list to succeed partially may not provide the desired result ## Add an alias at index creation [add-alias-at-creation] -You can also use a [component](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html) or [index template](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-template.html) to add index or data stream aliases when they are created. +You can also use a [component](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-component-template) or [index template](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template) to add index or data stream aliases when they are created. ```console # Component template with index aliases @@ -201,7 +201,7 @@ PUT _index_template/my-index-template } ``` -You can also specify index aliases in [create index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) requests. +You can also specify index aliases in [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) requests. ```console # PUT @@ -216,7 +216,7 @@ PUT %3Cmy-index-%7Bnow%2Fd%7D-000001%3E ## View aliases [view-aliases] -To get a list of your cluster’s aliases, use the [get alias API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-alias.html) with no argument. +To get a list of your cluster’s aliases, use the [get alias API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-alias) with no argument. ```console GET _alias @@ -306,7 +306,7 @@ POST _aliases ``` ::::{note} -Filters are only applied when using the [Query DSL](../../explore-analyze/query-filter/languages/querydsl.md), and are not applied when [retrieving a document by ID](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html). 
+Filters are only applied when using the [Query DSL](../../explore-analyze/query-filter/languages/querydsl.md), and are not applied when [retrieving a document by ID](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-get). :::: diff --git a/manage-data/data-store/index-types/downsampling-time-series-data-stream.md b/manage-data/data-store/index-types/downsampling-time-series-data-stream.md index 88059a4c4..c57804c15 100644 --- a/manage-data/data-store/index-types/downsampling-time-series-data-stream.md +++ b/manage-data/data-store/index-types/downsampling-time-series-data-stream.md @@ -74,7 +74,7 @@ Fields in the target, downsampled index are created based on fields in the origi ## Running downsampling on time series data [running-downsampling] -To downsample a time series index, use the [Downsample API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-downsample-data-stream.html) and set `fixed_interval` to the level of granularity that you’d like: +To downsample a time series index, use the [Downsample API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-downsample) and set `fixed_interval` to the level of granularity that you’d like: ```console POST /my-time-series-index/_downsample/my-downsampled-time-series-index @@ -105,7 +105,7 @@ PUT _ilm/policy/my_policy ## Querying downsampled indices [querying-downsampled-indices] -You can use the [`_search`](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html) and [`_async_search`](https://www.elastic.co/guide/en/elasticsearch/reference/current/async-search.html) endpoints to query a downsampled index. Multiple raw data and downsampled indices can be queried in a single request, and a single request can include downsampled indices at different granularities (different bucket timespan). 
That is, you can query data streams that contain downsampled indices with multiple downsampling intervals (for example, `15m`, `1h`, `1d`). +You can use the [`_search`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) and [`_async_search`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-async-search-submit) endpoints to query a downsampled index. Multiple raw data and downsampled indices can be queried in a single request, and a single request can include downsampled indices at different granularities (different bucket timespan). That is, you can query data streams that contain downsampled indices with multiple downsampling intervals (for example, `15m`, `1h`, `1d`). The result of a time based histogram aggregation is in a uniform bucket size and each downsampled index returns data ignoring the downsampling time interval. For example, if you run a `date_histogram` aggregation with `"fixed_interval": "1m"` on a downsampled index that has been downsampled at an hourly resolution (`"fixed_interval": "1h"`), the query returns one bucket with all of the data at minute 0, then 59 empty buckets, and then a bucket with data again for the next hour. diff --git a/manage-data/data-store/index-types/modify-data-stream.md b/manage-data/data-store/index-types/modify-data-stream.md index 7fef11dcd..40a719468 100644 --- a/manage-data/data-store/index-types/modify-data-stream.md +++ b/manage-data/data-store/index-types/modify-data-stream.md @@ -55,7 +55,7 @@ To add a mapping for a new field to a data stream, following these steps: 1. Adds a mapping for the new `message` field. -2. Use the [update mapping API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html) to add the new field mapping to the data stream. By default, this adds the mapping to the stream’s existing backing indices, including the write index. +2. 
Use the [update mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) to add the new field mapping to the data stream. By default, this adds the mapping to the stream’s existing backing indices, including the write index. The following update mapping API request adds the new `message` field mapping to `my-data-stream`. @@ -89,7 +89,7 @@ To add a mapping for a new field to a data stream, following these steps: ### Change an existing field mapping in a data stream [change-existing-field-mapping-in-a-data-stream] -The documentation for each [mapping parameter](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-params.html) indicates whether you can update it for an existing field using the [update mapping API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html). To update these parameters for an existing field, follow these steps: +The documentation for each [mapping parameter](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-params.html) indicates whether you can update it for an existing field using the [update mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping). To update these parameters for an existing field, follow these steps: 1. Update the index template used by the data stream. This ensures the updated field mapping is added to future backing indices created for the stream. @@ -122,9 +122,9 @@ The documentation for each [mapping parameter](https://www.elastic.co/guide/en/e 1. Changes the `host.ip` field’s `ignore_malformed` value to `true`. -2. Use the [update mapping API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html) to apply the mapping changes to the data stream. By default, this applies the changes to the stream’s existing backing indices, including the write index. +2. 
Use the [update mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) to apply the mapping changes to the data stream. By default, this applies the changes to the stream’s existing backing indices, including the write index. - The following [update mapping API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html) request targets `my-data-stream`. The request changes the argument for the `host.ip` field’s `ignore_malformed` mapping parameter to `true`. + The following [update mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) request targets `my-data-stream`. The request changes the argument for the `host.ip` field’s `ignore_malformed` mapping parameter to `true`. ```console PUT /my-data-stream/_mapping @@ -194,7 +194,7 @@ To change a [dynamic index setting](https://www.elastic.co/guide/en/elasticsearc 1. Changes the `index.refresh_interval` setting to `30s` (30 seconds). -2. Use the [update index settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) to update the index setting for the data stream. By default, this applies the setting to the stream’s existing backing indices, including the write index. +2. Use the [update index settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) to update the index setting for the data stream. By default, this applies the setting to the stream’s existing backing indices, including the write index. The following update index settings API request updates the `index.refresh_interval` setting for `my-data-stream`. 
@@ -209,14 +209,14 @@ To change a [dynamic index setting](https://www.elastic.co/guide/en/elasticsearc ::::{important} -To change the `index.lifecycle.name` setting, first use the [remove policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-remove-policy.html) to remove the existing {{ilm-init}} policy. See [Switch lifecycle policies](../../lifecycle/index-lifecycle-management/configure-lifecycle-policy.md#switch-lifecycle-policies). +To change the `index.lifecycle.name` setting, first use the [remove policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-remove-policy) to remove the existing {{ilm-init}} policy. See [Switch lifecycle policies](../../lifecycle/index-lifecycle-management/configure-lifecycle-policy.md#switch-lifecycle-policies). :::: ### Change a static index setting for a data stream [change-static-index-setting-for-a-data-stream] -[Static index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-modules-settings) can only be set when a backing index is created. You cannot update static index settings using the [update index settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html). +[Static index settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-modules-settings) can only be set when a backing index is created. You cannot update static index settings using the [update index settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings). To apply a new static setting to future backing indices, update the index template used by the data stream. The setting is automatically applied to any backing index created after the update. @@ -319,7 +319,7 @@ Follow these steps: 2. Adds the `sort.field` index setting. 3. Adds the `sort.order` index setting. -3. 
Use the [create data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-data-stream.html) to manually create the new data stream. The name of the data stream must match the index pattern defined in the new template’s `index_patterns` property. +3. Use the [create data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create-data-stream) to manually create the new data stream. The name of the data stream must match the index pattern defined in the new template’s `index_patterns` property. We do not recommend [indexing new data to create this data stream](set-up-data-stream.md#create-data-stream). Later, you will reindex older data from an existing data stream into this new stream. This could result in one or more backing indices that contains a mix of new and old data. @@ -341,7 +341,7 @@ Follow these steps: 4. If you do not want to mix new and old data in your new data stream, pause the indexing of new documents. While mixing old and new data is safe, it could interfere with data retention. See [Mixing new and old data in a data stream](modify-data-stream.md#data-stream-mix-new-old-data). 5. If you use {{ilm-init}} to [automate rollover](../../lifecycle/index-lifecycle-management/tutorial-automate-rollover.md), reduce the {{ilm-init}} poll interval. This ensures the current write index doesn’t grow too large while waiting for the rollover check. By default, {{ilm-init}} checks rollover conditions every 10 minutes. - The following [cluster update settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html) request lowers the `indices.lifecycle.poll_interval` setting to `1m` (one minute). + The following [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) request lowers the `indices.lifecycle.poll_interval` setting to `1m` (one minute). 
```console PUT /_cluster/settings @@ -354,7 +354,7 @@ Follow these steps: 6. Reindex your data to the new data stream using an `op_type` of `create`. - If you want to partition the data in the order in which it was originally indexed, you can run separate reindex requests. These reindex requests can use individual backing indices as the source. You can use the [get data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-data-stream.html) to retrieve a list of backing indices. + If you want to partition the data in the order in which it was originally indexed, you can run separate reindex requests. These reindex requests can use individual backing indices as the source. You can use the [get data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream) to retrieve a list of backing indices. For example, you plan to reindex data from `my-data-stream` into `new-data-stream`. However, you want to submit a separate reindex request for each backing index in `my-data-stream`, starting with the oldest backing index. This preserves the order in which the data was originally indexed. @@ -406,7 +406,7 @@ Follow these steps: 1. First item in the `indices` array for `my-data-stream`. This item contains information about the stream’s oldest backing index, `.ds-my-data-stream-2099.03.07-000001`. - The following [reindex API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) request copies documents from `.ds-my-data-stream-2099.03.07-000001` to `new-data-stream`. The request’s `op_type` is `create`. + The following [reindex API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) request copies documents from `.ds-my-data-stream-2099.03.07-000001` to `new-data-stream`. The request’s `op_type` is `create`. 
```console POST /_reindex @@ -423,7 +423,7 @@ Follow these steps: You can also use a query to reindex only a subset of documents with each request. - The following [reindex API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) request copies documents from `my-data-stream` to `new-data-stream`. The request uses a [`range` query](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-range-query.html) to only reindex documents with a timestamp within the last week. Note the request’s `op_type` is `create`. + The following [reindex API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) request copies documents from `my-data-stream` to `new-data-stream`. The request uses a [`range` query](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-range-query.html) to only reindex documents with a timestamp within the last week. Note the request’s `op_type` is `create`. ```console POST /_reindex @@ -462,7 +462,7 @@ Follow these steps: 8. Resume indexing using the new data stream. Searches on this stream will now query your new data and the reindexed data. 9. Once you have verified that all reindexed data is available in the new data stream, you can safely remove the old stream. - The following [delete data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-data-stream.html) request deletes `my-data-stream`. This request also deletes the stream’s backing indices and any data they contain. + The following [delete data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete-data-stream) request deletes `my-data-stream`. This request also deletes the stream’s backing indices and any data they contain. 
```console DELETE /_data_stream/my-data-stream @@ -472,7 +472,7 @@ Follow these steps: ## Update or add an alias to a data stream [data-streams-change-alias] -Use the [aliases API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html) to update an existing data stream’s aliases. Changing an existing data stream’s aliases in its index pattern has no effect. +Use the [aliases API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-update-aliases) to update an existing data stream’s aliases. Changing an existing data stream’s aliases in its index pattern has no effect. For example, the `logs` alias points to a single data stream. The following request swaps the stream for the alias. During this swap, the `logs` alias has no downtime and never points to both streams at the same time. diff --git a/manage-data/data-store/index-types/run-downsampling-manually.md b/manage-data/data-store/index-types/run-downsampling-manually.md index c703e562a..78368aab7 100644 --- a/manage-data/data-store/index-types/run-downsampling-manually.md +++ b/manage-data/data-store/index-types/run-downsampling-manually.md @@ -349,7 +349,7 @@ This returns: Before a backing index can be downsampled, the TSDS needs to be rolled over and the old index needs to be made read-only. -Roll over the TSDS using the [rollover API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html): +Roll over the TSDS using the [rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover): ```console POST /my-data-stream/_rollover/ @@ -363,7 +363,7 @@ The old index needs to be set to read-only mode. 
Run the following request: PUT /.ds-my-data-stream-2023.07.26-000001/_block/write ``` -Next, use the [downsample API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-downsample-data-stream.html) to downsample the index, setting the time series interval to one hour: +Next, use the [downsample API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-downsample) to downsample the index, setting the time series interval to one hour: ```console POST /.ds-my-data-stream-2023.07.26-000001/_downsample/.ds-my-data-stream-2023.07.26-000001-downsample @@ -372,7 +372,7 @@ POST /.ds-my-data-stream-2023.07.26-000001/_downsample/.ds-my-data-stream-2023.0 } ``` -Now you can [modify the data stream](https://www.elastic.co/guide/en/elasticsearch/reference/current/modify-data-streams-api.html), and replace the original index with the downsampled one: +Now you can [modify the data stream](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-modify-data-stream), and replace the original index with the downsampled one: ```console POST _data_stream/_modify diff --git a/manage-data/data-store/index-types/run-downsampling-using-data-stream-lifecycle.md b/manage-data/data-store/index-types/run-downsampling-using-data-stream-lifecycle.md index f06b78368..bdf9a4865 100644 --- a/manage-data/data-store/index-types/run-downsampling-using-data-stream-lifecycle.md +++ b/manage-data/data-store/index-types/run-downsampling-using-data-stream-lifecycle.md @@ -32,7 +32,7 @@ For simplicity, in the time series mapping all `time_series_metric` parameters a The index template includes a set of static [time series dimensions](time-series-data-stream-tsds.md#time-series-dimension): `host`, `namespace`, `node`, and `pod`. The time series dimensions are not changed by the downsampling process. 
-To enable downsampling, this template includes a `lifecycle` section with [downsampling](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-put-lifecycle.html#data-streams-put-lifecycle-downsampling-example) object. `fixed_interval` parameter sets downsampling interval at which you want to aggregate the original time series data. `after` parameter specifies how much time after index was rolled over should pass before downsampling is performed. +To enable downsampling, this template includes a `lifecycle` section with a [downsampling](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle) object. The `fixed_interval` parameter sets the downsampling interval at which you want to aggregate the original time series data. The `after` parameter specifies how much time should pass after the index is rolled over before downsampling is performed. ```console PUT _index_template/datastream_template @@ -305,7 +305,7 @@ The query returns your ten newly added documents. Data stream lifecycle will automatically roll over data stream and perform downsampling. This step is only needed in order to see downsampling results in scope of this tutorial. -Roll over the data stream using the [rollover API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html): +Roll over the data stream using the [rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover): ```console POST /datastream/_rollover/ @@ -460,7 +460,7 @@ The new downsampled index contains just one document that includes the `min`, `m } ``` -Use the [data stream stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-stats-api.html) to get statistics for the data stream, including the storage size. 
+Use the [data stream stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-data-streams-stats-1) to get statistics for the data stream, including the storage size. ```console GET /_data_stream/datastream/_stats?human=true diff --git a/manage-data/data-store/index-types/run-downsampling-with-ilm.md b/manage-data/data-store/index-types/run-downsampling-with-ilm.md index a3e42adca..fff6ba026 100644 --- a/manage-data/data-store/index-types/run-downsampling-with-ilm.md +++ b/manage-data/data-store/index-types/run-downsampling-with-ilm.md @@ -433,7 +433,7 @@ The new downsampled index contains just one document that includes the `min`, `m } ``` -Use the [data stream stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-stats-api.html) to get statistics for the data stream, including the storage size. +Use the [data stream stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-data-streams-stats-1) to get statistics for the data stream, including the storage size. ```console GET /_data_stream/datastream/_stats?human=true diff --git a/manage-data/data-store/index-types/set-up-data-stream.md b/manage-data/data-store/index-types/set-up-data-stream.md index bacb5936f..ce3fad0ef 100644 --- a/manage-data/data-store/index-types/set-up-data-stream.md +++ b/manage-data/data-store/index-types/set-up-data-stream.md @@ -30,7 +30,7 @@ While optional, we recommend using {{ilm-init}} to automate the management of yo To create an index lifecycle policy in {{kib}}, open the main menu and go to **Stack Management > Index Lifecycle Policies**. Click **Create policy**. -You can also use the [create lifecycle policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-put-lifecycle.html). +You can also use the [create lifecycle policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-put-lifecycle). 
```console PUT _ilm/policy/my-lifecycle-policy @@ -102,7 +102,7 @@ If you’re unsure how to map your fields, use [runtime fields](../mapping/defin To create a component template in {{kib}}, open the main menu and go to **Stack Management > Index Management**. In the **Index Templates** view, click **Create component template**. -You can also use the [create component template API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html). +You can also use the [create component template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-component-template). ```console # Creates a component template for mappings @@ -154,7 +154,7 @@ Use your component templates to create an index template. Specify: To create an index template in {{kib}}, open the main menu and go to **Stack Management > Index Management**. In the **Index Templates** view, click **Create template**. -You can also use the [create index template API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-template.html). Include the `data_stream` object to enable data streams. +You can also use the [create index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template). Include the `data_stream` object to enable data streams. ```console PUT _index_template/my-index-template @@ -191,7 +191,7 @@ POST my-data-stream/_doc } ``` -You can also manually create the stream using the [create data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-data-stream.html). The stream’s name must still match one of your template’s index patterns. +You can also manually create the stream using the [create data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create-data-stream). The stream’s name must still match one of your template’s index patterns. 
```console PUT _data_stream/my-data-stream @@ -209,7 +209,7 @@ For an example, see [Data stream privileges](../../../deploy-manage/users-roles/ Prior to {{es}} 7.9, you’d typically use an [index alias with a write index](../../lifecycle/index-lifecycle-management/tutorial-automate-rollover.md#manage-time-series-data-without-data-streams) to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with [data tiers](../../lifecycle/data-tiers.md). -To convert an index alias with a write index to a data stream with the same name, use the [migrate to data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-migrate-to-data-stream.html). During conversion, the alias’s indices become hidden backing indices for the stream. The alias’s write index becomes the stream’s write index. The stream still requires a matching index template with data stream enabled. +To convert an index alias with a write index to a data stream with the same name, use the [migrate to data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-migrate-to-data-stream). During conversion, the alias’s indices become hidden backing indices for the stream. The alias’s write index becomes the stream’s write index. The stream still requires a matching index template with data stream enabled. ```console POST _data_stream/_migrate/my-time-series-data @@ -220,7 +220,7 @@ POST _data_stream/_migrate/my-time-series-data To get information about a data stream in {{kib}}, open the main menu and go to **Stack Management > Index Management**. In the **Data Streams** view, click the data stream’s name. -You can also use the [get data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-data-stream.html). +You can also use the [get data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream). 
```console GET _data_stream/my-data-stream @@ -231,7 +231,7 @@ GET _data_stream/my-data-stream To delete a data stream and its backing indices in {{kib}}, open the main menu and go to **Stack Management > Index Management**. In the **Data Streams** view, click the trash icon. The icon only displays if you have the `delete_index` [security privilege](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) for the data stream. -You can also use the [delete data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-data-stream.html). +You can also use the [delete data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete-data-stream). ```console DELETE _data_stream/my-data-stream diff --git a/manage-data/data-store/index-types/set-up-tsds.md b/manage-data/data-store/index-types/set-up-tsds.md index cdddd08ea..aba61d16d 100644 --- a/manage-data/data-store/index-types/set-up-tsds.md +++ b/manage-data/data-store/index-types/set-up-tsds.md @@ -180,7 +180,7 @@ POST metrics-weather_sensors-dev/_doc } ``` -You can also manually create the TSDS using the [create data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-data-stream.html). The TSDS’s name must still match one of your template’s index patterns. +You can also manually create the TSDS using the [create data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create-data-stream). The TSDS’s name must still match one of your template’s index patterns. ```console PUT _data_stream/metrics-weather_sensors-dev @@ -201,7 +201,7 @@ You can also use the above steps to convert an existing regular data stream to a * Edit your existing index lifecycle policy, component templates, and index templates instead of creating new ones. * Instead of creating the TSDS, manually roll over its write index. 
This ensures the current write index and any new backing indices have an [`index.mode` of `time_series`](time-series-data-stream-tsds.md#time-series-mode). - You can manually roll over the write index using the [rollover API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html). + You can manually roll over the write index using the [rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover). ```console POST metrics-weather_sensors-dev/_rollover @@ -222,5 +222,5 @@ Now that you’ve set up your TSDS, you can manage and use it like a regular dat * [*Use a data stream*](use-data-stream.md) * [Change mappings and settings for a data stream](modify-data-stream.md#data-streams-change-mappings-and-settings) -* [data stream APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-apis.html) +* [data stream APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-data-stream) diff --git a/manage-data/data-store/index-types/time-series-data-stream-tsds.md b/manage-data/data-store/index-types/time-series-data-stream-tsds.md index 01a0bacec..1dec79306 100644 --- a/manage-data/data-store/index-types/time-series-data-stream-tsds.md +++ b/manage-data/data-store/index-types/time-series-data-stream-tsds.md @@ -129,7 +129,7 @@ If you convert an existing data stream to a TSDS, only backing indices created a When you add a document to a TSDS, {{es}} automatically generates a `_tsid` metadata field for the document. The `_tsid` is an object containing the document’s dimensions. Documents in the same TSDS with the same `_tsid` are part of the same time series. -The `_tsid` field is not queryable or updatable. You also can’t retrieve a document’s `_tsid` using a [get document](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html) request. 
However, you can use the `_tsid` field in aggregations and retrieve the `_tsid` value in searches using the [`fields` parameter](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-fields.html#search-fields-param). +The `_tsid` field is not queryable or updatable. You also can’t retrieve a document’s `_tsid` using a [get document](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-get) request. However, you can use the `_tsid` field in aggregations and retrieve the `_tsid` value in searches using the [`fields` parameter](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-fields.html#search-fields-param). ::::{warning} The format of the `_tsid` field shouldn’t be relied upon. It may change from version to version. @@ -188,7 +188,7 @@ A TSDS is designed to ingest current metrics data. When the TSDS is first create Only data that falls inside that range can be indexed. -You can use the [get data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-data-stream.html) to check the accepted time range for writing to any TSDS. +You can use the [get data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream) to check the accepted time range for writing to any TSDS. ### Dimension-based routing [dimension-based-routing] diff --git a/manage-data/data-store/index-types/tsdb.md b/manage-data/data-store/index-types/tsdb.md index 11c9d689d..d25af6cae 100644 --- a/manage-data/data-store/index-types/tsdb.md +++ b/manage-data/data-store/index-types/tsdb.md @@ -129,7 +129,7 @@ If you convert an existing data stream to a TSDS, only backing indices created a When you add a document to a TSDS, {{es}} automatically generates a `_tsid` metadata field for the document. The `_tsid` is an object containing the document’s dimensions. Documents in the same TSDS with the same `_tsid` are part of the same time series. 
-The `_tsid` field is not queryable or updatable. You also can’t retrieve a document’s `_tsid` using a [get document](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html) request. However, you can use the `_tsid` field in aggregations and retrieve the `_tsid` value in searches using the [`fields` parameter](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-fields.html#search-fields-param). +The `_tsid` field is not queryable or updatable. You also can’t retrieve a document’s `_tsid` using a [get document](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-get) request. However, you can use the `_tsid` field in aggregations and retrieve the `_tsid` value in searches using the [`fields` parameter](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-fields.html#search-fields-param). ::::{warning} The format of the `_tsid` field shouldn’t be relied upon. It may change from version to version. @@ -188,7 +188,7 @@ A TSDS is designed to ingest current metrics data. When the TSDS is first create Only data that falls inside that range can be indexed. -You can use the [get data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-data-stream.html) to check the accepted time range for writing to any TSDS. +You can use the [get data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream) to check the accepted time range for writing to any TSDS. 
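For example, a minimal sketch of such a check (the data stream name is illustrative); for a TSDS, the response includes a `time_series` object whose `temporal_ranges` list the accepted start and end times:

```console
GET _data_stream/my-tsds
```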
### Dimension-based routing [dimension-based-routing] diff --git a/manage-data/data-store/index-types/use-data-stream.md b/manage-data/data-store/index-types/use-data-stream.md index f7a7a8fcd..afac70f62 100644 --- a/manage-data/data-store/index-types/use-data-stream.md +++ b/manage-data/data-store/index-types/use-data-stream.md @@ -20,7 +20,7 @@ After you [set up a data stream](set-up-data-stream.md), you can do the followin ## Add documents to a data stream [add-documents-to-a-data-stream] -To add an individual document, use the [index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html). [Ingest pipelines](../../ingest/transform-enrich/ingest-pipelines.md) are supported. +To add an individual document, use the [index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create). [Ingest pipelines](../../ingest/transform-enrich/ingest-pipelines.md) are supported. ```console POST /my-data-stream/_doc/ @@ -33,9 +33,9 @@ POST /my-data-stream/_doc/ } ``` -You cannot add new documents to a data stream using the index API’s `PUT /<target>/_doc/<_id>` request format. To specify a document ID, use the `PUT /<target>/_create/<_id>` format instead. Only an [`op_type`](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#docs-index-api-op_type) of `create` is supported. +You cannot add new documents to a data stream using the index API’s `PUT /<target>/_doc/<_id>` request format. To specify a document ID, use the `PUT /<target>/_create/<_id>` format instead. Only an [`op_type`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create#docs-index-api-op_type) of `create` is supported. -To add multiple documents with a single request, use the [bulk API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html). Only `create` actions are supported.
+To add multiple documents with a single request, use the [bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk). Only `create` actions are supported. ```console PUT /my-data-stream/_bulk?refresh @@ -52,16 +52,16 @@ PUT /my-data-stream/_bulk?refresh The following search APIs support data streams: -* [Search](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html) -* [Async search](https://www.elastic.co/guide/en/elasticsearch/reference/current/async-search.html) -* [Multi search](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html) -* [Field capabilities](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-field-caps.html) -* [EQL search](https://www.elastic.co/guide/en/elasticsearch/reference/current/eql-search-api.html) +* [Search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) +* [Async search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-async-search-submit) +* [Multi search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch) +* [Field capabilities](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-field-caps) +* [EQL search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-eql-search) ## Get statistics for a data stream [get-stats-for-a-data-stream] -Use the [data stream stats API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-stats-api.html) to get statistics for one or more data streams: +Use the [data stream stats API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-data-streams-stats-1) to get statistics for one or more data streams: ```console GET /_data_stream/my-data-stream/_stats?human=true @@ -70,7 +70,7 @@ GET /_data_stream/my-data-stream/_stats?human=true ## Manually roll over a data stream [manually-roll-over-a-data-stream] -Use the [rollover 
API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html) to manually [roll over](data-streams.md#data-streams-rollover) a data stream. You have two options when manually rolling over: +Use the [rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover) to manually [roll over](data-streams.md#data-streams-rollover) a data stream. You have two options when manually rolling over: 1. To immediately trigger a rollover: @@ -90,9 +90,9 @@ Use the [rollover API](https://www.elastic.co/guide/en/elasticsearch/reference/c ## Open closed backing indices [open-closed-backing-indices] -You cannot search a [closed](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-close.html) backing index, even by searching its data stream. You also cannot [update](#update-docs-in-a-data-stream-by-query) or [delete](#delete-docs-in-a-data-stream-by-query) documents in a closed index. +You cannot search a [closed](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-close) backing index, even by searching its data stream. You also cannot [update](#update-docs-in-a-data-stream-by-query) or [delete](#delete-docs-in-a-data-stream-by-query) documents in a closed index. -To re-open a closed backing index, submit an [open index API request](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-open-close.html) directly to the index: +To re-open a closed backing index, submit an [open index API request](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-open) directly to the index: ```console POST /.ds-my-data-stream-2099.03.07-000001/_open/ @@ -107,7 +107,7 @@ POST /my-data-stream/_open/ ## Reindex with a data stream [reindex-with-a-data-stream] -Use the [reindex API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) to copy documents from an existing index, alias, or data stream to a data stream. 
Because data streams are [append-only](data-streams.md#data-streams-append-only), a reindex into a data stream must use an `op_type` of `create`. A reindex cannot update existing documents in a data stream. +Use the [reindex API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) to copy documents from an existing index, alias, or data stream to a data stream. Because data streams are [append-only](data-streams.md#data-streams-append-only), a reindex into a data stream must use an `op_type` of `create`. A reindex cannot update existing documents in a data stream. ```console POST /_reindex @@ -125,7 +125,7 @@ POST /_reindex ## Update documents in a data stream by query [update-docs-in-a-data-stream-by-query] -Use the [update by query API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html) to update documents in a data stream that match a provided query: +Use the [update by query API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-update-by-query) to update documents in a data stream that match a provided query: ```console POST /my-data-stream/_update_by_query @@ -147,7 +147,7 @@ POST /my-data-stream/_update_by_query ## Delete documents in a data stream by query [delete-docs-in-a-data-stream-by-query] -Use the [delete by query API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html) to delete documents in a data stream that match a provided query: +Use the [delete by query API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-delete-by-query) to delete documents in a data stream that match a provided query: ```console POST /my-data-stream/_delete_by_query @@ -227,7 +227,7 @@ Response: 4. 
Primary term for the document -To update the document, use an [index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html) request with valid `if_seq_no` and `if_primary_term` arguments: +To update the document, use an [index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) request with valid `if_seq_no` and `if_primary_term` arguments: ```console PUT /.ds-my-data-stream-2099.03.08-000003/_doc/bfspvnIBr7VVZlfp2lqX?if_seq_no=0&if_primary_term=1 @@ -240,13 +240,13 @@ PUT /.ds-my-data-stream-2099.03.08-000003/_doc/bfspvnIBr7VVZlfp2lqX?if_seq_no=0& } ``` -To delete the document, use the [delete API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete.html): +To delete the document, use the [delete API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-delete): ```console DELETE /.ds-my-data-stream-2099.03.08-000003/_doc/bfspvnIBr7VVZlfp2lqX ``` -To delete or update multiple documents with a single request, use the [bulk API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html)'s `delete`, `index`, and `update` actions. For `index` actions, include valid [`if_seq_no` and `if_primary_term`](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html#bulk-optimistic-concurrency-control) arguments. +To delete or update multiple documents with a single request, use the [bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk)'s `delete`, `index`, and `update` actions. For `index` actions, include valid [`if_seq_no` and `if_primary_term`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk#bulk-optimistic-concurrency-control) arguments.
```console PUT /_bulk?refresh diff --git a/manage-data/data-store/mapping.md b/manage-data/data-store/mapping.md index 6da8a80c0..585f11527 100644 --- a/manage-data/data-store/mapping.md +++ b/manage-data/data-store/mapping.md @@ -77,9 +77,9 @@ Use [runtime fields](/manage-data/data-store/mapping/runtime-fields.md) to make Explicit mappings should be defined at index creation for fields you know in advance. You can still add new fields to mappings at any time, as your data evolves. -Use the [Update mapping API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html) to update an existing mapping. +Use the [Update mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) to update an existing mapping. -In most cases, you can’t change mappings for fields that are already mapped. These changes require [reindexing](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html). +In most cases, you can’t change mappings for fields that are already mapped. These changes require [reindexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex). However, you can update mappings under certain conditions: diff --git a/manage-data/data-store/mapping/dynamic-field-mapping.md b/manage-data/data-store/mapping/dynamic-field-mapping.md index 9f0c01906..0d8c88b89 100644 --- a/manage-data/data-store/mapping/dynamic-field-mapping.md +++ b/manage-data/data-store/mapping/dynamic-field-mapping.md @@ -33,7 +33,7 @@ $$$dynamic-field-mapping-types$$$ You can disable dynamic mapping, both at the document and at the [`object`](https://www.elastic.co/guide/en/elasticsearch/reference/current/object.html) level. Setting the `dynamic` parameter to `false` ignores new fields, and `strict` rejects the document if {{es}} encounters an unknown field. 
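As a sketch of the `strict` behavior described above (index and field names are illustrative), the following request creates an index that rejects any document containing a field not listed under `properties`:

```console
PUT my-index-000001
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "user": {
        "properties": {
          "name": { "type": "text" }
        }
      }
    }
  }
}
```

With this mapping, indexing a document that includes an unmapped field such as `user.id` should be rejected with a `strict_dynamic_mapping_exception`.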
::::{tip} -Use the [update mapping API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html) to update the `dynamic` setting on existing fields. +Use the [update mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) to update the `dynamic` setting on existing fields. :::: diff --git a/manage-data/data-store/mapping/dynamic-templates.md b/manage-data/data-store/mapping/dynamic-templates.md index b2cc660be..928cbba10 100644 --- a/manage-data/data-store/mapping/dynamic-templates.md +++ b/manage-data/data-store/mapping/dynamic-templates.md @@ -10,7 +10,7 @@ Dynamic templates allow you greater control over how {{es}} maps your data beyon * [`match_mapping_type` and `unmatch_mapping_type`](#match-mapping-type) operate on the data type that {{es}} detects * [`match` and `unmatch`](#match-unmatch) use a pattern to match on the field name * [`path_match` and `path_unmatch`](#path-match-unmatch) operate on the full dotted path to the field -* If a dynamic template doesn’t define `match_mapping_type`, `match`, or `path_match`, it won’t match any field. You can still refer to the template by name in `dynamic_templates` section of a [bulk request](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html#bulk). +* If a dynamic template doesn’t define `match_mapping_type`, `match`, or `path_match`, it won’t match any field. You can still refer to the template by name in the `dynamic_templates` section of a [bulk request](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk). Use the `{{name}}` and `{{dynamic_type}}` [template variables](#template-variables) in the mapping specification as placeholders.
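A minimal sketch of a dynamic template that uses a match condition together with a template variable (the index and template names are illustrative); note that inside the JSON, the variables are written with single braces, for example `{dynamic_type}`:

```console
PUT my-index-000001
{
  "mappings": {
    "dynamic_templates": [
      {
        "no_doc_values": {
          "match_mapping_type": "*",
          "mapping": {
            "type": "{dynamic_type}",
            "doc_values": false
          }
        }
      }
    ]
  }
}
```

Each newly detected field keeps its detected type but is mapped with `doc_values` disabled.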
@@ -45,7 +45,7 @@ If a provided mapping contains an invalid mapping snippet, a validation error is * If no `match_mapping_type` has been specified but the template is valid for at least one predefined mapping type, the mapping snippet is considered valid. However, a validation error is returned at index time if a field matching the template is indexed as a different type. For example, configuring a dynamic template with no `match_mapping_type` is considered valid as string type, but if a field matching the dynamic template is indexed as a long, a validation error is returned at index time. It is recommended to configure the `match_mapping_type` to the expected JSON type or configure the desired `type` in the mapping snippet. * If the `{{name}}` placeholder is used in the mapping snippet, validation is skipped when updating the dynamic template. This is because the field name is unknown at that time. Instead, validation occurs when the template is applied at index time. -Templates are processed in order — the first matching template wins. When putting new dynamic templates through the [update mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html) API, all existing templates are overwritten. This allows for dynamic templates to be reordered or deleted after they were initially added. +Templates are processed in order — the first matching template wins. When putting new dynamic templates through the [update mapping](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) API, all existing templates are overwritten. This allows for dynamic templates to be reordered or deleted after they were initially added. 
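Because an update replaces the entire `dynamic_templates` array, resubmit every template you want to keep, in the desired order. A sketch (index and template names are illustrative):

```console
PUT my-index-000001/_mapping
{
  "dynamic_templates": [
    {
      "longs_as_integers": {
        "match_mapping_type": "long",
        "mapping": { "type": "integer" }
      }
    },
    {
      "strings_as_keywords": {
        "match_mapping_type": "string",
        "mapping": { "type": "keyword" }
      }
    }
  ]
}
```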
## Mapping runtime fields in a dynamic template [dynamic-mapping-runtime-fields] diff --git a/manage-data/data-store/mapping/explicit-mapping.md b/manage-data/data-store/mapping/explicit-mapping.md index 9cabe3bdd..ec72d6632 100644 --- a/manage-data/data-store/mapping/explicit-mapping.md +++ b/manage-data/data-store/mapping/explicit-mapping.md @@ -12,7 +12,7 @@ You can create field mappings when you [create an index](#create-mapping) and [a ## Create an index with an explicit mapping [create-mapping] -You can use the [create index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) API to create a new index with an explicit mapping. +You can use the [create index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) API to create a new index with an explicit mapping. ```console PUT /my-index-000001 @@ -35,7 +35,7 @@ PUT /my-index-000001 ## Add a field to an existing mapping [add-field-mapping] -You can use the [update mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html) API to add one or more new fields to an existing index. +You can use the [update mapping](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-mapping) API to add one or more new fields to an existing index. The following example adds `employee-id`, a `keyword` field with an [`index`](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-index.html) mapping parameter value of `false`. This means values for the `employee-id` field are stored but not indexed or available for search. @@ -58,14 +58,14 @@ Except for supported [mapping parameters](https://www.elastic.co/guide/en/elasti If you need to change the mapping of a field in a data stream’s backing indices, see [Change mappings and settings for a data stream](../index-types/modify-data-stream.md#data-streams-change-mappings-and-settings). 
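A sketch of such an update mapping request:

```console
PUT /my-index-000001/_mapping
{
  "properties": {
    "employee-id": {
      "type": "keyword",
      "index": false
    }
  }
}
```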
-If you need to change the mapping of a field in other indices, create a new index with the correct mapping and [reindex](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) your data into that index. +If you need to change the mapping of a field in other indices, create a new index with the correct mapping and [reindex](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) your data into that index. Renaming a field would invalidate data already indexed under the old field name. Instead, add an [`alias`](https://www.elastic.co/guide/en/elasticsearch/reference/current/field-alias.html) field to create an alternate field name. ## View the mapping of an index [view-mapping] -You can use the [get mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-mapping.html) API to view the mapping of an existing index. +You can use the [get mapping](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-mapping) API to view the mapping of an existing index. ```console GET /my-index-000001/_mapping @@ -100,7 +100,7 @@ The API returns the following response: ## View the mapping of specific fields [view-field-mapping] -If you only want to view the mapping of one or more specific fields, you can use the [get field mapping](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-field-mapping.html) API. +If you only want to view the mapping of one or more specific fields, you can use the [get field mapping](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-field-mapping) API. This is useful if you don’t need the complete mapping of an index or your index contains a large number of fields.
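For instance, a sketch that returns the mapping of only the `employee-id` field (the index name is illustrative):

```console
GET /my-index-000001/_mapping/field/employee-id
```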
diff --git a/manage-data/data-store/mapping/explore-data-with-runtime-fields.md b/manage-data/data-store/mapping/explore-data-with-runtime-fields.md index adf711066..7ab9c82a8 100644 --- a/manage-data/data-store/mapping/explore-data-with-runtime-fields.md +++ b/manage-data/data-store/mapping/explore-data-with-runtime-fields.md @@ -33,7 +33,7 @@ PUT /my-index-000001/ ## Ingest some data [runtime-examples-ingest-data] -After mapping the fields you want to retrieve, index a few records from your log data into {{es}}. The following request uses the [bulk API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) to index raw log data into `my-index-000001`. Instead of indexing all of your log data, you can use a small sample to experiment with runtime fields. +After mapping the fields you want to retrieve, index a few records from your log data into {{es}}. The following request uses the [bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) to index raw log data into `my-index-000001`. Instead of indexing all of your log data, you can use a small sample to experiment with runtime fields. The final document is not a valid Apache log format, but we can account for that scenario in our script. diff --git a/manage-data/data-store/mapping/runtime-fields.md b/manage-data/data-store/mapping/runtime-fields.md index b9ed083ae..613fb7c9a 100644 --- a/manage-data/data-store/mapping/runtime-fields.md +++ b/manage-data/data-store/mapping/runtime-fields.md @@ -47,7 +47,7 @@ Runtime fields use less disk space and provide flexibility in how you access you To balance search performance and flexibility, index fields that you’ll frequently search for and filter on, such as a timestamp. {{es}} automatically uses these indexed fields first when running a query, resulting in a fast response time. You can then use runtime fields to limit the number of fields that {{es}} needs to calculate values for. 
Using indexed fields in tandem with runtime fields provides flexibility in the data that you index and how you define queries for other fields. -Use the [asynchronous search API](https://www.elastic.co/guide/en/elasticsearch/reference/current/async-search.html) to run searches that include runtime fields. This method of search helps to offset the performance impacts of computing values for runtime fields in each document containing that field. If the query can’t return the result set synchronously, you’ll get results asynchronously as they become available. +Use the [asynchronous search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-async-search-submit) to run searches that include runtime fields. This method of search helps to offset the performance impacts of computing values for runtime fields in each document containing that field. If the query can’t return the result set synchronously, you’ll get results asynchronously as they become available. ::::{important} Queries against runtime fields are considered expensive. If [`search.allow_expensive_queries`](../../../explore-analyze/query-filter/languages/querydsl.md#query-dsl-allow-expensive-queries) is set to `false`, expensive queries are not allowed and {{es}} will reject any queries against runtime fields. diff --git a/manage-data/data-store/templates.md b/manage-data/data-store/templates.md index 610b5b826..01187b6b6 100644 --- a/manage-data/data-store/templates.md +++ b/manage-data/data-store/templates.md @@ -6,19 +6,19 @@ mapped_pages: # Templates [index-templates] ::::{note} -This topic describes the composable index templates introduced in {{es}} 7.8. For information about how index templates worked previously, see the [legacy template documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates-v1.html). +This topic describes the composable index templates introduced in {{es}} 7.8. 
For information about how index templates worked previously, see the [legacy template documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-template). :::: $$$getting$$$ An index template is a way to tell {{es}} how to configure an index when it is created. For data streams, the index template configures the stream’s backing indices as they are created. Templates are configured **prior to index creation**. When an index is created - either manually or through indexing a document - the template settings are used as a basis for creating the index. -There are two types of templates: index templates and [component templates](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html). Component templates are reusable building blocks that configure mappings, settings, and aliases. While you can use component templates to construct index templates, they aren’t directly applied to a set of indices. Index templates can contain a collection of component templates, as well as directly specify settings, mappings, and aliases. +There are two types of templates: index templates and [component templates](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-component-template). Component templates are reusable building blocks that configure mappings, settings, and aliases. While you can use component templates to construct index templates, they aren’t directly applied to a set of indices. Index templates can contain a collection of component templates, as well as directly specify settings, mappings, and aliases. The following conditions apply to index templates: * Composable templates take precedence over legacy templates. If no composable template matches a given index, a legacy template may still match and be applied. 
-* If an index is created with explicit settings and also matches an index template, the settings from the [create index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) request take precedence over settings specified in the index template and its component templates. +* If an index is created with explicit settings and also matches an index template, the settings from the [create index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) request take precedence over settings specified in the index template and its component templates. * Settings specified in the index template itself take precedence over the settings in its component templates. * If a new data stream or index matches more than one index template, the index template with the highest priority is used. @@ -38,7 +38,7 @@ The following conditions apply to index templates: If you use {{fleet}} or {{agent}}, assign your index templates a priority lower than `100` to avoid overriding these templates. Otherwise, to avoid accidentally applying the templates, do one or more of the following: -* To disable all built-in index and component templates, set [`stack.templates.enabled`](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-management-settings.html#stack-templates-enabled) to `false` using the [cluster update settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html). Note, however, that this is not recommended, see the [setting documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-management-settings.html#stack-templates-enabled) for more information. 
+* To disable all built-in index and component templates, set [`stack.templates.enabled`](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-management-settings.html#stack-templates-enabled) to `false` using the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Note, however, that this is not recommended; see the [setting documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-management-settings.html#stack-templates-enabled) for more information. * Use a non-overlapping index pattern. * Assign templates with an overlapping pattern a `priority` higher than `500`. For example, if you don’t use {{fleet}} or {{agent}} and want to create a template for the `logs-*` index pattern, assign your template a priority of `500`. This ensures your template is applied instead of the built-in template for `logs-*-*`. * To avoid naming collisions with built-in and Fleet-managed index templates, avoid using `@` as part of the name of your own index templates. @@ -49,7 +49,7 @@ If you use {{fleet}} or {{agent}}, assign your index templates a priority lower ## Create index template [create-index-templates] -Use the [index template](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-template.html) and [put component template](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html) APIs to create and update index templates. You can also [manage index templates](../lifecycle/index-lifecycle-management/index-management-in-kibana.md) from Stack Management in {{kib}}. +Use the [index template](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template) and [put component template](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-component-template) APIs to create and update index templates.
You can also [manage index templates](../lifecycle/index-lifecycle-management/index-management-in-kibana.md) from Stack Management in {{kib}}. The following requests create two component templates. diff --git a/manage-data/data-store/text-analysis/specify-an-analyzer.md b/manage-data/data-store/text-analysis/specify-an-analyzer.md index 9c6fcdb46..197e3b854 100644 --- a/manage-data/data-store/text-analysis/specify-an-analyzer.md +++ b/manage-data/data-store/text-analysis/specify-an-analyzer.md @@ -17,7 +17,7 @@ The flexibility to specify analyzers at different levels and for different times In most cases, a simple approach works best: Specify an analyzer for each `text` field, as outlined in [Specify the analyzer for a field](#specify-index-field-analyzer). -This approach works well with {{es}}'s default behavior, letting you use the same analyzer for indexing and search. It also lets you quickly see which analyzer applies to which field using the [get mapping API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-mapping.html). +This approach works well with {{es}}'s default behavior, letting you use the same analyzer for indexing and search. It also lets you quickly see which analyzer applies to which field using the [get mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-mapping). If you don’t typically create mappings for your indices, you can use [index templates](../templates.md) to achieve a similar effect. @@ -38,7 +38,7 @@ If none of these parameters are specified, the [`standard` analyzer](https://www When mapping an index, you can use the [`analyzer`](https://www.elastic.co/guide/en/elasticsearch/reference/current/analyzer.html) mapping parameter to specify an analyzer for each `text` field. 
-The following [create index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) request sets the `whitespace` analyzer as the analyzer for the `title` field. +The following [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) request sets the `whitespace` analyzer as the analyzer for the `title` field. ```console PUT my-index-000001 @@ -59,7 +59,7 @@ PUT my-index-000001 In addition to a field-level analyzer, you can set a fallback analyzer for using the `analysis.analyzer.default` setting. -The following [create index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) request sets the `simple` analyzer as the fallback analyzer for `my-index-000001`. +The following [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) request sets the `simple` analyzer as the fallback analyzer for `my-index-000001`. ```console PUT my-index-000001 @@ -101,7 +101,7 @@ If none of these parameters are specified, the [`standard` analyzer](https://www When writing a [full-text query](https://www.elastic.co/guide/en/elasticsearch/reference/current/full-text-queries.html), you can use the `analyzer` parameter to specify a search analyzer. If provided, this overrides any other search analyzers. -The following [search API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html) request sets the `stop` analyzer as the search analyzer for a [`match`](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html) query. +The following [search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search) request sets the `stop` analyzer as the search analyzer for a [`match`](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html) query. 
```console GET my-index-000001/_search @@ -124,7 +124,7 @@ When mapping an index, you can use the [`search_analyzer`](https://www.elastic.c If a search analyzer is provided, the index analyzer must also be specified using the `analyzer` parameter. -The following [create index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) request sets the `simple` analyzer as the search analyzer for the `title` field. +The following [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) request sets the `simple` analyzer as the search analyzer for the `title` field. ```console PUT my-index-000001 @@ -144,11 +144,11 @@ PUT my-index-000001 ## Specify the default search analyzer for an index [specify-search-default-analyzer] -When [creating an index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html), you can set a default search analyzer using the `analysis.analyzer.default_search` setting. +When [creating an index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create), you can set a default search analyzer using the `analysis.analyzer.default_search` setting. If a search analyzer is provided, a default index analyzer must also be specified using the `analysis.analyzer.default` setting. -The following [create index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) request sets the `whitespace` analyzer as the default search analyzer for the `my-index-000001` index. +The following [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) request sets the `whitespace` analyzer as the default search analyzer for the `my-index-000001` index. 
```console PUT my-index-000001 diff --git a/manage-data/data-store/text-analysis/test-an-analyzer.md b/manage-data/data-store/text-analysis/test-an-analyzer.md index 7cc26c421..3ad14f5f5 100644 --- a/manage-data/data-store/text-analysis/test-an-analyzer.md +++ b/manage-data/data-store/text-analysis/test-an-analyzer.md @@ -5,7 +5,7 @@ mapped_pages: # Test an analyzer [test-analyzer] -The [`analyze` API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-analyze.html) is an invaluable tool for viewing the terms produced by an analyzer. A built-in analyzer can be specified inline in the request: +The [`analyze` API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-analyze) is an invaluable tool for viewing the terms produced by an analyzer. A built-in analyzer can be specified inline in the request: ```console POST _analyze diff --git a/manage-data/ingest.md b/manage-data/ingest.md index ed2edf738..e48a513fd 100644 --- a/manage-data/ingest.md +++ b/manage-data/ingest.md @@ -24,7 +24,7 @@ You can ingest: Elastic offer tools designed to ingest specific types of general content. The content type determines the best ingest option. -* To index **documents** directly into {{es}}, use the {{es}} [document APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html). +* To index **documents** directly into {{es}}, use the {{es}} [document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document). * To send **application data** directly to {{es}}, use an [{{es}} language client](https://www.elastic.co/guide/en/elasticsearch/client/index.html). * To index **web page content**, use the Elastic [web crawler](https://www.elastic.co/web-crawler). * To sync **data from third-party sources**, use [connectors](https://www.elastic.co/guide/en/elasticsearch/reference/current/es-connectors.html). A connector syncs content from an original data source to an {{es}} index. 
Using connectors you can create *searchable*, read-only replicas of your data sources. diff --git a/manage-data/ingest/ingesting-data-for-elastic-solutions.md b/manage-data/ingest/ingesting-data-for-elastic-solutions.md index 20e8f8d23..75f0ceaa1 100644 --- a/manage-data/ingest/ingesting-data-for-elastic-solutions.md +++ b/manage-data/ingest/ingesting-data-for-elastic-solutions.md @@ -35,7 +35,7 @@ To use [Elastic Agent](https://www.elastic.co/guide/en/fleet/current) and [Elast * [Elastic Search for integrations](https://www.elastic.co/integrations/data-integrations?solution=search) * [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current) - * [{{es}} document APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html) + * [{{es}} document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document) * [{{es}} language clients](https://www.elastic.co/guide/en/elasticsearch/client/index.html) * [Elastic web crawler](https://www.elastic.co/web-crawler) * [Elastic connectors](https://www.elastic.co/guide/en/elasticsearch/reference/current/es-connectors.html) @@ -93,7 +93,7 @@ Bring your ideas and use {{es}} and the {{stack}} to store, search, and visualiz * [Install {{agent}}](https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html) * [{{es}} Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current) - * [{{es}} document APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html) + * [{{es}} document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document) * [{{es}} language clients](https://www.elastic.co/guide/en/elasticsearch/client/index.html) * [Elastic web crawler](https://www.elastic.co/web-crawler) * [Elastic connectors](https://www.elastic.co/guide/en/elasticsearch/reference/current/es-connectors.html) diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md 
b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md index 6b449cbf2..eec6274cf 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md @@ -290,7 +290,7 @@ const client = new Client({ }) ``` -Check [Create API key API](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). +Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). 
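+For instance, a minimal API key scoped to a single index could be created as follows (a sketch only: the key name, index name, and privilege list here are illustrative; adjust them to what your application actually needs):
+
+```console
+POST /_security/api_key
+{
+  "name": "my-ingest-app-key",
+  "role_descriptors": {
+    "ingest-writer": {
+      "cluster": ["monitor"],
+      "indices": [
+        {
+          "names": ["my-index-000001"],
+          "privileges": ["create_doc"]
+        }
+      ]
+    }
+  }
+}
+```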
### Best practices [ec_best_practices] diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md index 10348c774..1895bd3e0 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md @@ -351,7 +351,7 @@ es = Elasticsearch( ) ``` -Check [Create API key API](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). +Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-elasticsearch-audit-logs.md) on {{es}} to find out what privileges are being used. 
To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/examples.html). diff --git a/manage-data/ingest/transform-enrich/example-enrich-data-based-on-exact-values.md b/manage-data/ingest/transform-enrich/example-enrich-data-based-on-exact-values.md index 4b873b21e..585eac0f2 100644 --- a/manage-data/ingest/transform-enrich/example-enrich-data-based-on-exact-values.md +++ b/manage-data/ingest/transform-enrich/example-enrich-data-based-on-exact-values.md @@ -9,7 +9,7 @@ mapped_pages: The following example creates a `match` enrich policy that adds user name and contact information to incoming documents based on an email address. It then adds the `match` enrich policy to a processor in an ingest pipeline. -Use the [create index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) or [index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html) to create a source index. +Use the [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) or [index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) to create a source index. The following index API request creates a source index and indexes a new document to that index. @@ -44,13 +44,13 @@ PUT /_enrich/policy/users-policy } ``` -Use the [execute enrich policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/execute-enrich-policy-api.html) to create an enrich index for the policy. 
+Use the [execute enrich policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-enrich-execute-policy) to create an enrich index for the policy. ```console POST /_enrich/policy/users-policy/_execute?wait_for_completion=false ``` -Use the [create or update pipeline API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-pipeline-api.html) to create an ingest pipeline. In the pipeline, add an [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) that includes: +Use the [create or update pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) to create an ingest pipeline. In the pipeline, add an [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) that includes: * Your enrich policy. * The `field` of incoming documents used to match documents from the enrich index. @@ -82,7 +82,7 @@ PUT /my-index-000001/_doc/my_id?pipeline=user_lookup } ``` -To verify the enrich processor matched and appended the appropriate field data, use the [get API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html) to view the indexed document. +To verify the enrich processor matched and appended the appropriate field data, use the [get API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-get) to view the indexed document. 
```console GET /my-index-000001/_doc/my_id diff --git a/manage-data/ingest/transform-enrich/example-enrich-data-based-on-geolocation.md b/manage-data/ingest/transform-enrich/example-enrich-data-based-on-geolocation.md index c884147dc..ddffd2333 100644 --- a/manage-data/ingest/transform-enrich/example-enrich-data-based-on-geolocation.md +++ b/manage-data/ingest/transform-enrich/example-enrich-data-based-on-geolocation.md @@ -9,7 +9,7 @@ mapped_pages: The following example creates a `geo_match` enrich policy that adds postal codes to incoming documents based on a set of coordinates. It then adds the `geo_match` enrich policy to a processor in an ingest pipeline. -Use the [create index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) to create a source index containing at least one `geo_shape` field. +Use the [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) to create a source index containing at least one `geo_shape` field. ```console PUT /postal_codes @@ -27,7 +27,7 @@ PUT /postal_codes } ``` -Use the [index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html) to index enrich data to this source index. +Use the [index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) to index enrich data to this source index. ```console PUT /postal_codes/_doc/1?refresh=wait_for @@ -40,7 +40,7 @@ PUT /postal_codes/_doc/1?refresh=wait_for } ``` -Use the [create enrich policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-enrich-policy-api.html) to create an enrich policy with the `geo_match` policy type. This policy must include: +Use the [create enrich policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-enrich-put-policy) to create an enrich policy with the `geo_match` policy type. 
This policy must include: * One or more source indices * A `match_field`, the `geo_shape` field from the source indices used to match incoming documents @@ -57,13 +57,13 @@ PUT /_enrich/policy/postal_policy } ``` -Use the [execute enrich policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/execute-enrich-policy-api.html) to create an enrich index for the policy. +Use the [execute enrich policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-enrich-execute-policy) to create an enrich index for the policy. ```console POST /_enrich/policy/postal_policy/_execute?wait_for_completion=false ``` -Use the [create or update pipeline API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-pipeline-api.html) to create an ingest pipeline. In the pipeline, add an [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) that includes: +Use the [create or update pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) to create an ingest pipeline. In the pipeline, add an [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) that includes: * Your enrich policy. * The `field` of incoming documents used to match the geoshape of documents from the enrich index. @@ -98,7 +98,7 @@ PUT /users/_doc/0?pipeline=postal_lookup } ``` -To verify the enrich processor matched and appended the appropriate field data, use the [get API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html) to view the indexed document. +To verify the enrich processor matched and appended the appropriate field data, use the [get API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-get) to view the indexed document. 
```console GET /users/_doc/0 diff --git a/manage-data/ingest/transform-enrich/example-enrich-data-by-matching-value-to-range.md b/manage-data/ingest/transform-enrich/example-enrich-data-by-matching-value-to-range.md index 57e602a0e..b7d2d6d54 100644 --- a/manage-data/ingest/transform-enrich/example-enrich-data-by-matching-value-to-range.md +++ b/manage-data/ingest/transform-enrich/example-enrich-data-by-matching-value-to-range.md @@ -9,7 +9,7 @@ A `range` [enrich policy](data-enrichment.md#enrich-policy) uses a [`term` query The following example creates a `range` enrich policy that adds a descriptive network name and responsible department to incoming documents based on an IP address. It then adds the enrich policy to a processor in an ingest pipeline. -Use the [create index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html) with the appropriate mappings to create a source index. +Use the [create index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create) with the appropriate mappings to create a source index. ```console PUT /networks @@ -54,13 +54,13 @@ PUT /_enrich/policy/networks-policy } ``` -Use the [execute enrich policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/execute-enrich-policy-api.html) to create an enrich index for the policy. +Use the [execute enrich policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-enrich-execute-policy) to create an enrich index for the policy. ```console POST /_enrich/policy/networks-policy/_execute?wait_for_completion=false ``` -Use the [create or update pipeline API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-pipeline-api.html) to create an ingest pipeline. 
In the pipeline, add an [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) that includes: +Use the [create or update pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) to create an ingest pipeline. In the pipeline, add an [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) that includes: * Your enrich policy. * The `field` of incoming documents used to match documents from the enrich index. @@ -92,7 +92,7 @@ PUT /my-index-000001/_doc/my_id?pipeline=networks_lookup } ``` -To verify the enrich processor matched and appended the appropriate field data, use the [get API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html) to view the indexed document. +To verify the enrich processor matched and appended the appropriate field data, use the [get API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-get) to view the indexed document. ```console GET /my-index-000001/_doc/my_id diff --git a/manage-data/ingest/transform-enrich/ingest-pipelines.md b/manage-data/ingest/transform-enrich/ingest-pipelines.md index 1809e7f4c..a3c45e587 100644 --- a/manage-data/ingest/transform-enrich/ingest-pipelines.md +++ b/manage-data/ingest/transform-enrich/ingest-pipelines.md @@ -13,7 +13,7 @@ A pipeline consists of a series of configurable tasks called [processors](https: :alt: Ingest pipeline diagram ::: -You can create and manage ingest pipelines using {{kib}}'s **Ingest Pipelines** feature or the [ingest APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-apis.html). {{es}} stores pipelines in the [cluster state](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-state.html). 
+You can create and manage ingest pipelines using {{kib}}'s **Ingest Pipelines** feature or the [ingest APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest). {{es}} stores pipelines in the [cluster state](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-state). :::{note} To run an {{es}} pipeline in {{serverless-full}}, refer to [{{es}} Ingest pipelines (Serverless)](./ingest-pipelines-serverless.md). @@ -46,7 +46,7 @@ The **New pipeline from CSV** option lets you use a CSV to create an ingest pipe :::: -You can also use the [ingest APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest-apis.html) to create and manage pipelines. The following [create pipeline API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-pipeline-api.html) request creates a pipeline containing two [`set`](https://www.elastic.co/guide/en/elasticsearch/reference/current/set-processor.html) processors followed by a [`lowercase`](https://www.elastic.co/guide/en/elasticsearch/reference/current/lowercase-processor.html) processor. The processors run sequentially in the order specified. +You can also use the [ingest APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-ingest) to create and manage pipelines. The following [create pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) request creates a pipeline containing two [`set`](https://www.elastic.co/guide/en/elasticsearch/reference/current/set-processor.html) processors followed by a [`lowercase`](https://www.elastic.co/guide/en/elasticsearch/reference/current/lowercase-processor.html) processor. The processors run sequentially in the order specified. 
```console PUT _ingest/pipeline/my-pipeline @@ -79,7 +79,7 @@ PUT _ingest/pipeline/my-pipeline ## Manage pipeline versions [manage-pipeline-versions] -When you create or update a pipeline, you can specify an optional `version` integer. You can use this version number with the [`if_version`](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-pipeline-api.html#put-pipeline-api-query-params) parameter to conditionally update the pipeline. When the `if_version` parameter is specified, a successful update increments the pipeline’s version. +When you create or update a pipeline, you can specify an optional `version` integer. You can use this version number with the [`if_version`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline#put-pipeline-api-query-params) parameter to conditionally update the pipeline. When the `if_version` parameter is specified, a successful update increments the pipeline’s version. ```console PUT _ingest/pipeline/my-pipeline-id @@ -101,7 +101,7 @@ Before using a pipeline in production, we recommend you test it using sample doc :class: screenshot ::: -You can also test pipelines using the [simulate pipeline API](https://www.elastic.co/guide/en/elasticsearch/reference/current/simulate-pipeline-api.html). You can specify a configured pipeline in the request path. For example, the following request tests `my-pipeline`. +You can also test pipelines using the [simulate pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-simulate). You can specify a configured pipeline in the request path. For example, the following request tests `my-pipeline`. 
```console POST _ingest/pipeline/my-pipeline/_simulate @@ -188,7 +188,7 @@ The API returns transformed documents: ## Add a pipeline to an indexing request [add-pipeline-to-indexing-request] -Use the `pipeline` query parameter to apply a pipeline to documents in [individual](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html) or [bulk](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html) indexing requests. +Use the `pipeline` query parameter to apply a pipeline to documents in [individual](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) or [bulk](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) indexing requests. ```console POST my-data-stream/_doc?pipeline=my-pipeline @@ -204,7 +204,7 @@ PUT my-data-stream/_bulk?pipeline=my-pipeline { "@timestamp": "2099-03-07T11:04:07.000Z", "my-keyword-field": "bar" } ``` -You can also use the `pipeline` parameter with the [update by query](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html) or [reindex](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) APIs. +You can also use the `pipeline` parameter with the [update by query](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-update-by-query) or [reindex](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) APIs. ```console POST my-data-stream/_update_by_query?pipeline=my-pipeline @@ -269,7 +269,7 @@ $$$pipeline-custom-logs-index-template$$$ 2. Create an [index template](../../data-store/templates.md) that includes your pipeline in the [`index.default_pipeline`](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-default-pipeline) or [`index.final_pipeline`](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-final-pipeline) index setting. 
Ensure the template is [data stream enabled](../../data-store/index-types/set-up-data-stream.md#create-index-template). The template’s index pattern should match `logs--*`. - You can create this template using {{kib}}'s [**Index Management**](../../lifecycle/index-lifecycle-management/index-management-in-kibana.md#manage-index-templates) feature or the [create index template API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-template.html). + You can create this template using {{kib}}'s [**Index Management**](../../lifecycle/index-lifecycle-management/index-management-in-kibana.md#manage-index-templates) feature or the [create index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template). For example, the following request creates a template matching `logs-my_app-*`. The template uses a component template that contains the `index.default_pipeline` index setting. @@ -305,7 +305,7 @@ $$$pipeline-custom-logs-index-template$$$ :class: screenshot ::: -5. Use the [rollover API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html) to roll over your data stream. This ensures {{es}} applies the index template and its pipeline settings to any new data for the integration. +5. Use the [rollover API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-rollover) to roll over your data stream. This ensures {{es}} applies the index template and its pipeline settings to any new data for the integration. ```console POST logs-my_app-default/_rollover/ @@ -477,7 +477,7 @@ PUT _ingest/pipeline/my-pipeline The set processor above tells ES to use the dynamic template named `geo_point` for the field `address` if this field is not defined in the mapping of the index yet. 
This processor overrides the dynamic template for the field `address` if already defined in the bulk request, but has no effect on other dynamic templates defined in the bulk request. ::::{warning} -If you [automatically generate](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#create-document-ids-automatically) document IDs, you cannot use `{{{_id}}}` in a processor. {{es}} assigns auto-generated `_id` values after ingest. +If you [automatically generate](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create#create-document-ids-automatically) document IDs, you cannot use `{{{_id}}}` in a processor. {{es}} assigns auto-generated `_id` values after ingest. :::: @@ -799,7 +799,7 @@ PUT _ingest/pipeline/one-pipeline-to-rule-them-all ## Get pipeline usage statistics [get-pipeline-usage-stats] -Use the [node stats](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html) API to get global and per-pipeline ingest statistics. Use these stats to determine which pipelines run most frequently or spend the most time processing. +Use the [node stats](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-stats) API to get global and per-pipeline ingest statistics. Use these stats to determine which pipelines run most frequently or spend the most time processing. ```console GET _nodes/stats/ingest?filter_path=nodes.*.ingest diff --git a/manage-data/ingest/transform-enrich/set-up-an-enrich-processor.md b/manage-data/ingest/transform-enrich/set-up-an-enrich-processor.md index 308d0be41..45cf9b7d7 100644 --- a/manage-data/ingest/transform-enrich/set-up-an-enrich-processor.md +++ b/manage-data/ingest/transform-enrich/set-up-an-enrich-processor.md @@ -38,14 +38,14 @@ To use enrich policies, you must have: To begin, add documents to one or more source indices. These documents should contain the enrich data you eventually want to add to incoming data. 
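For example, a document for a hypothetical `users` enrich source index might be indexed like this (the index name and fields are illustrative, not from the original page):

```console
PUT /users/_doc/1?refresh=wait_for
{
  "email": "mardy.brown@example.com",
  "first_name": "Mardy",
  "last_name": "Brown",
  "city": "New Orleans"
}
```

The `refresh=wait_for` parameter simply makes the document searchable before any enrich policy that reads the index is executed.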
-You can manage source indices just like regular {{es}} indices using the [document](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html) and [index](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices.html) APIs. +You can manage source indices just like regular {{es}} indices using the [document](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document) and [index](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-indices) APIs. You also can set up [{{beats}}](https://www.elastic.co/guide/en/beats/libbeat/current/getting-started.html), such as a [{{filebeat}}](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation-configuration.html), to automatically send and index documents to your source indices. See [Getting started with {{beats}}](https://www.elastic.co/guide/en/beats/libbeat/current/getting-started.html). ## Create an enrich policy [create-enrich-policy] -After adding enrich data to your source indices, use the [create enrich policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-enrich-policy-api.html) or [Index Management in {{kib}}](../../lifecycle/index-lifecycle-management/index-management-in-kibana.md#manage-enrich-policies) to create an enrich policy. +After adding enrich data to your source indices, use the [create enrich policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-enrich-put-policy) or [Index Management in {{kib}}](../../lifecycle/index-lifecycle-management/index-management-in-kibana.md#manage-enrich-policies) to create an enrich policy. ::::{warning} Once created, you can’t update or change an enrich policy. See [Update an enrich policy](#update-enrich-policies). @@ -56,13 +56,13 @@ Once created, you can’t update or change an enrich policy. 
See [Update an enri ## Execute the enrich policy [execute-enrich-policy] -Once the enrich policy is created, you need to execute it using the [execute enrich policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/execute-enrich-policy-api.html) or [Index Management in {{kib}}](../../lifecycle/index-lifecycle-management/index-management-in-kibana.md#manage-enrich-policies) to create an [enrich index](data-enrichment.md#enrich-index). +Once the enrich policy is created, you need to execute it using the [execute enrich policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-enrich-execute-policy) or [Index Management in {{kib}}](../../lifecycle/index-lifecycle-management/index-management-in-kibana.md#manage-enrich-policies) to create an [enrich index](data-enrichment.md#enrich-index). :::{image} ../../../images/elasticsearch-reference-enrich-policy-index.svg :alt: enrich policy index ::: -The *enrich index* contains documents from the policy’s source indices. Enrich indices always begin with `.enrich-*`, are read-only, and are [force merged](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-forcemerge.html). +The *enrich index* contains documents from the policy’s source indices. Enrich indices always begin with `.enrich-*`, are read-only, and are [force merged](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-forcemerge). ::::{warning} Enrich indices should only be used by the [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) or the [{{esql}} `ENRICH` command](https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-commands.html#esql-enrich). Avoid using enrich indices for other purposes. 
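Assuming a hypothetical policy named `users-policy`, the execute step is a single call that builds a new `.enrich-*` index from the policy's source indices:

```console
POST /_enrich/policy/users-policy/_execute
```

Execution can take a while for large source indices; passing `wait_for_completion=false` runs it as a task instead of blocking the request.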
@@ -79,7 +79,7 @@ Once you have source indices, an enrich policy, and the related enrich index in :alt: enrich processor ::: -Define an [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) and add it to an ingest pipeline using the [create or update pipeline API](https://www.elastic.co/guide/en/elasticsearch/reference/current/put-pipeline-api.html). +Define an [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/enrich-processor.html) and add it to an ingest pipeline using the [create or update pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline). When defining the enrich processor, you must include at least the following: @@ -102,28 +102,28 @@ You can now use your ingest pipeline to enrich and index documents. :alt: enrich process ::: -Before implementing the pipeline in production, we recommend indexing a few test documents first and verifying enrich data was added correctly using the [get API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html). +Before implementing the pipeline in production, we recommend indexing a few test documents first and verifying enrich data was added correctly using the [get API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-get). ## Update an enrich index [update-enrich-data] -Once created, you cannot update or index documents to an enrich index. Instead, update your source indices and [execute](https://www.elastic.co/guide/en/elasticsearch/reference/current/execute-enrich-policy-api.html) the enrich policy again. This creates a new enrich index from your updated source indices. The previous enrich index will be deleted with a delayed maintenance job that executes by default every 15 minutes. +Once created, you cannot update or index documents to an enrich index. 
Instead, update your source indices and [execute](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-enrich-execute-policy) the enrich policy again. This creates a new enrich index from your updated source indices. The previous enrich index will be deleted with a delayed maintenance job that executes by default every 15 minutes.

-If wanted, you can [reindex](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html) or [update](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html) any already ingested documents using your ingest pipeline.
+If desired, you can [reindex](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) or [update](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-update-by-query) any already ingested documents using your ingest pipeline.


## Update an enrich policy [update-enrich-policies]

Once created, you can’t update or change an enrich policy. Instead, you can:

-1. Create and [execute](https://www.elastic.co/guide/en/elasticsearch/reference/current/execute-enrich-policy-api.html) a new enrich policy.
+1. Create and [execute](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-enrich-execute-policy) a new enrich policy.
2. Replace the previous enrich policy with the new enrich policy in any in-use enrich processors or {{esql}} queries.
-3. Use the [delete enrich policy](https://www.elastic.co/guide/en/elasticsearch/reference/current/delete-enrich-policy-api.html) API or [Index Management in {{kib}}](../../lifecycle/index-lifecycle-management/index-management-in-kibana.md#manage-enrich-policies) to delete the previous enrich policy.
+3.
Use the [delete enrich policy](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-enrich-delete-policy) API or [Index Management in {{kib}}](../../lifecycle/index-lifecycle-management/index-management-in-kibana.md#manage-enrich-policies) to delete the previous enrich policy.


## Enrich components [ingest-enrich-components]

-The enrich coordinator is a component that manages and performs the searches required to enrich documents on each ingest node. It combines searches from all enrich processors in all pipelines into bulk [multi-searches](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html).
+The enrich coordinator is a component that manages and performs the searches required to enrich documents on each ingest node. It combines searches from all enrich processors in all pipelines into bulk [multi-searches](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch).

The enrich policy executor is a component that manages the executions of all enrich policies. When an enrich policy is executed, this component creates a new enrich index and removes the previous enrich index. The enrich policy executions are managed from the elected master node. The execution of these policies occurs on a different node.

@@ -138,10 +138,10 @@ The enrich coordinator supports the following node settings:

: Maximum size of the cache that caches searches for enriching documents. The size can be specified in three units: the raw number of cached searches (e.g. `1000`), an absolute size in bytes (e.g. `100Mb`), or a percentage of the max heap space of the node (e.g. `1%`). For both the absolute byte size and the percentage of heap space, {{es}} does not guarantee that the enrich cache size will adhere exactly to that maximum, as {{es}} uses the byte size of the serialized search response, which is a good representation of the used space on the heap but not an exact match. Defaults to `1%`.
There is a single cache for all enrich processors in the cluster. `enrich.coordinator_proxy.max_concurrent_requests` -: Maximum number of concurrent [multi-search requests](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html) to run when enriching documents. Defaults to `8`. +: Maximum number of concurrent [multi-search requests](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch) to run when enriching documents. Defaults to `8`. `enrich.coordinator_proxy.max_lookups_per_request` -: Maximum number of searches to include in a [multi-search request](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html) when enriching documents. Defaults to `128`. +: Maximum number of searches to include in a [multi-search request](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch) when enriching documents. Defaults to `128`. The enrich policy executor supports the following node settings: @@ -149,7 +149,7 @@ The enrich policy executor supports the following node settings: : Maximum batch size when reindexing a source index into an enrich index. Defaults to `10000`. `enrich.max_force_merge_attempts` -: Maximum number of [force merge](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-forcemerge.html) attempts allowed on an enrich index. Defaults to `3`. +: Maximum number of [force merge](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-forcemerge) attempts allowed on an enrich index. Defaults to `3`. `enrich.cleanup_period` : How often {{es}} checks whether unused enrich indices can be deleted. Defaults to `15m`. 
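To see how the coordinator and executor are behaving on a running cluster, the enrich stats API reports coordinator queue sizes, executed searches, and currently executing policies:

```console
GET /_enrich/_stats
```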
diff --git a/manage-data/lifecycle/data-stream.md b/manage-data/lifecycle/data-stream.md index df5c94adb..4d2854d4a 100644 --- a/manage-data/lifecycle/data-stream.md +++ b/manage-data/lifecycle/data-stream.md @@ -15,7 +15,7 @@ To achieve that, it supports: * Automatic [rollover](index-lifecycle-management/rollover.md), which chunks your incoming data in smaller pieces to facilitate better performance and backwards incompatible mapping changes. * Configurable retention, which allows you to configure the time period for which your data is guaranteed to be stored. {{es}} is allowed at a later time to delete data older than this time period. Retention can be configured on the data stream level or on a global level. Read more about the different options in this [tutorial](data-stream/tutorial-data-stream-retention.md). -A data stream lifecycle also supports downsampling the data stream backing indices. See [the downsampling example](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-put-lifecycle.html#data-streams-put-lifecycle-downsampling-example) for more details. +A data stream lifecycle also supports downsampling the data stream backing indices. See [the downsampling example](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle) for more details. ## How does it work? [data-streams-lifecycle-how-it-works] @@ -25,7 +25,7 @@ In intervals configured by [`data_streams.lifecycle.poll_interval`](https://www. 1. Checks if the data stream has a data stream lifecycle configured, skipping any indices not part of a managed data stream. 2. Rolls over the write index of the data stream, if it fulfills the conditions defined by [`cluster.lifecycle.default.rollover`](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-lifecycle-settings.html#cluster-lifecycle-default-rollover). 3. After an index is not the write index anymore (i.e. 
the data stream has been rolled over), automatically tail merges the index. Data stream lifecycle executes a merge operation that only targets the long tail of small segments instead of the whole shard. As the segments are organised into tiers of exponential sizes, merging the long tail of small segments is only a fraction of the cost of force merging to a single segment. The small segments would usually hold the most recent data so tail merging will focus the merging resources on the higher-value data that is most likely to keep being queried. -4. If [downsampling](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-put-lifecycle.html#data-streams-put-lifecycle-downsampling-example) is configured it will execute all the configured downsampling rounds. +4. If [downsampling](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle) is configured it will execute all the configured downsampling rounds. 5. Applies retention to the remaining backing indices. This means deleting the backing indices whose `generation_time` is longer than the effective retention period (read more about the [effective retention calculation](data-stream/tutorial-data-stream-retention.md#effective-retention-calculation)). The `generation_time` is only applicable to rolled over backing indices and it is either the time since the backing index got rolled over, or the time optionally configured in the [`index.lifecycle.origination_date`](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-lifecycle-settings.html#index-data-stream-lifecycle-origination-date) setting. 
::::{important} @@ -46,7 +46,7 @@ Since the lifecycle is configured on the data stream level, the process to confi In the following sections, we will go through the following tutorials: * To create a new data stream with a lifecycle, you need to add the data stream lifecycle as part of the index template that matches the name of your data stream (see [Tutorial: Create a data stream with a lifecycle](data-stream/tutorial-create-data-stream-with-lifecycle.md)). When a write operation with the name of your data stream reaches {{es}} then the data stream will be created with the respective data stream lifecycle. -* To update the lifecycle of an existing data stream you need to use the [data stream lifecycle APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-apis.html#data-stream-lifecycle-api) to edit the lifecycle on the data stream itself (see [Tutorial: Update existing data stream](data-stream/tutorial-update-existing-data-stream.md)). +* To update the lifecycle of an existing data stream you need to use the [data stream lifecycle APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-data-stream) to edit the lifecycle on the data stream itself (see [Tutorial: Update existing data stream](data-stream/tutorial-update-existing-data-stream.md)). * Migrate an existing {{ilm-init}} managed data stream to Data stream lifecycle using [Tutorial: Migrate ILM managed data stream to data stream lifecycle](data-stream/tutorial-migrate-ilm-managed-data-stream-to-data-stream-lifecycle.md). 
::::{note} diff --git a/manage-data/lifecycle/data-stream/tutorial-create-data-stream-with-lifecycle.md b/manage-data/lifecycle/data-stream/tutorial-create-data-stream-with-lifecycle.md index e35412d3a..9dfdf8b22 100644 --- a/manage-data/lifecycle/data-stream/tutorial-create-data-stream-with-lifecycle.md +++ b/manage-data/lifecycle/data-stream/tutorial-create-data-stream-with-lifecycle.md @@ -20,7 +20,7 @@ A data stream requires a matching [index template](../../data-store/templates.md * Define the lifecycle in the template section or include a composable template that defines the lifecycle. * Use a priority higher than `200` to avoid collisions with built-in templates. See [Avoid index pattern collisions](../../data-store/templates.md#avoid-index-pattern-collisions). -You can use the [create index template API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-template.html). +You can use the [create index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template). ```console PUT _index_template/my-index-template @@ -44,7 +44,7 @@ PUT _index_template/my-index-template You can create a data stream in two ways: -1. By manually creating the stream using the [create data stream API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-data-stream.html). The stream’s name must still match one of your template’s index patterns. +1. By manually creating the stream using the [create data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create-data-stream). The stream’s name must still match one of your template’s index patterns. 
```console PUT _data_stream/my-data-stream @@ -64,7 +64,7 @@ You can create a data stream in two ways: ## Retrieve lifecycle information [retrieve-lifecycle-information] -You can use the [get data stream lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-get-lifecycle.html) to see the data stream lifecycle of your data stream and the [explain data stream lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-explain-lifecycle.html) to see the exact state of each backing index. +You can use the [get data stream lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-lifecycle) to see the data stream lifecycle of your data stream and the [explain data stream lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-explain-data-lifecycle) to see the exact state of each backing index. ```console GET _data_stream/my-data-stream/_lifecycle @@ -95,7 +95,7 @@ The result will look like this: 4. The retention period that will be applied by the data stream lifecycle. This means that the data in this data stream will be kept at least for 7 days. After that {{es}} can delete it at its own discretion. 
-If you want to see more information about how the data stream lifecycle is applied on individual backing indices use the [explain data stream lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-explain-lifecycle.html):
+If you want to see more information about how the data stream lifecycle is applied on individual backing indices use the [explain data stream lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-explain-data-lifecycle):

```console
GET .ds-my-data-stream-*/_lifecycle/explain
diff --git a/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md b/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md
index 2e24456a4..241e4e456 100644
--- a/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md
+++ b/manage-data/lifecycle/data-stream/tutorial-data-stream-retention.md
@@ -12,7 +12,7 @@ In this tutorial, we are going to go over the data stream lifecycle retention; w
3. [How is the effective retention calculated?](#effective-retention-calculation)
4. [How is the effective retention applied?](#effective-retention-application)

-You can verify if a data steam is managed by the data stream lifecycle via the [get data stream lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-get-lifecycle.html):
+You can verify if a data stream is managed by the data stream lifecycle via the [get data stream lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-lifecycle):
It can be set via an [index template](../../data-store/templates.md) for future data streams or via the [PUT data stream lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-put-lifecycle.html) for an existing data stream. When the data stream retention is not set, it implies that the data need to be kept forever. -* The global default retention, let’s call it `default_retention`, which is a retention configured via the cluster setting [`data_streams.lifecycle.retention.default`](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-lifecycle-settings.html#data-streams-lifecycle-retention-default) and will be applied to all data streams managed by data stream lifecycle that do not have `data_retention` configured. Effectively, it ensures that there will be no data streams keeping their data forever. This can be set via the [update cluster settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html). -* The global max retention, let’s call it `max_retention`, which is a retention configured via the cluster setting [`data_streams.lifecycle.retention.max`](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-lifecycle-settings.html#data-streams-lifecycle-retention-max) and will be applied to all data streams managed by data stream lifecycle. Effectively, it ensures that there will be no data streams whose retention will exceed this time period. This can be set via the [update cluster settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html). +* The data stream retention, or `data_retention`, which is the retention configured on the data stream level. 
It can be set via an [index template](../../data-store/templates.md) for future data streams or via the [PUT data stream lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle) for an existing data stream. When the data stream retention is not set, it implies that the data need to be kept forever. +* The global default retention, let’s call it `default_retention`, which is a retention configured via the cluster setting [`data_streams.lifecycle.retention.default`](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-lifecycle-settings.html#data-streams-lifecycle-retention-default) and will be applied to all data streams managed by data stream lifecycle that do not have `data_retention` configured. Effectively, it ensures that there will be no data streams keeping their data forever. This can be set via the [update cluster settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). +* The global max retention, let’s call it `max_retention`, which is a retention configured via the cluster setting [`data_streams.lifecycle.retention.max`](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-lifecycle-settings.html#data-streams-lifecycle-retention-max) and will be applied to all data streams managed by data stream lifecycle. Effectively, it ensures that there will be no data streams whose retention will exceed this time period. This can be set via the [update cluster settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). * The effective retention, or `effective_retention`, which is the retention applied at a data stream on a given moment. Effective retention cannot be set, it is derived by taking into account all the configured retention listed above and is calculated as it is described [here](#effective-retention-calculation). 
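To check which of these retention values are configured on a given cluster, one option is to read the cluster settings with defaults included and look for the `data_streams.lifecycle.retention.default` and `data_streams.lifecycle.retention.max` keys in the response:

```console
GET _cluster/settings?include_defaults=true&flat_settings=true
```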
::::{note} @@ -64,7 +64,7 @@ Global default and max retention do not apply to data streams internal to elasti * By setting the `data_retention` on the data stream level. This retention can be configured in two ways: -  — For new data streams, it can be defined in the index template that would be applied during the data stream’s creation. You can use the [create index template API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-template.html), for example: +  — For new data streams, it can be defined in the index template that would be applied during the data stream’s creation. You can use the [create index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template), for example: ```console PUT _index_template/template @@ -83,7 +83,7 @@ Global default and max retention do not apply to data streams internal to elasti } ``` -  — For an existing data stream, it can be set via the [PUT lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-put-lifecycle.html). +  — For an existing data stream, it can be set via the [PUT lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle). ```console PUT _data_stream/my-data-stream/_lifecycle @@ -94,7 +94,7 @@ Global default and max retention do not apply to data streams internal to elasti 1. The retention period of this data stream is set to 30 days. -* By setting the global retention via the `data_streams.lifecycle.retention.default` and/or `data_streams.lifecycle.retention.max` that are set on a cluster level. You can be set via the [update cluster settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html). For example: +* By setting the global retention via the `data_streams.lifecycle.retention.default` and/or `data_streams.lifecycle.retention.max` that are set on a cluster level. 
These can be set via the [update cluster settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). For example:

```console
PUT /_cluster/settings
diff --git a/manage-data/lifecycle/data-stream/tutorial-migrate-ilm-managed-data-stream-to-data-stream-lifecycle.md b/manage-data/lifecycle/data-stream/tutorial-migrate-ilm-managed-data-stream-to-data-stream-lifecycle.md
index 90d9f6514..2edc6b1a2 100644
--- a/manage-data/lifecycle/data-stream/tutorial-migrate-ilm-managed-data-stream-to-data-stream-lifecycle.md
+++ b/manage-data/lifecycle/data-stream/tutorial-migrate-ilm-managed-data-stream-to-data-stream-lifecycle.md
@@ -13,7 +13,7 @@ In this tutorial we’ll look at migrating an existing data stream from [Index L

To migrate a data stream from {{ilm-init}} to data stream lifecycle we’ll have to execute two steps:

1. Update the index template that’s backing the data stream to set [prefer_ilm](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-lifecycle-settings.html#index-lifecycle-prefer-ilm) to `false`, and to configure data stream lifecycle.
-2. Configure the data stream lifecycle for the *existing* data stream using the [lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-put-lifecycle.html).
+2. Configure the data stream lifecycle for the *existing* data stream using the [lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle).

For more details see the [migrate to data stream lifecycle](#migrate-from-ilm-to-dsl) section.

@@ -75,7 +75,7 @@ POST dsl-data-stream/_doc?
POST dsl-data-stream/_rollover ``` -We’ll use the [GET _data_stream](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-data-stream.html) API to inspect the state of the data stream: +We’ll use the [GET _data_stream](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream) API to inspect the state of the data stream: ```console GET _data_stream/dsl-data-stream @@ -138,7 +138,7 @@ Inspecting the response we’ll see that both backing indices are managed by {{i To migrate the `dsl-data-stream` to data stream lifecycle we’ll have to execute two steps: 1. Update the index template that’s backing the data stream to set [prefer_ilm](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-lifecycle-settings.html#index-lifecycle-prefer-ilm) to `false`, and to configure data stream lifecycle. -2. Configure the data stream lifecycle for the *existing* `dsl-data-stream` using the [lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-put-lifecycle.html). +2. Configure the data stream lifecycle for the *existing* `dsl-data-stream` using the [lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle). ::::{important} The data stream lifecycle configuration that’s added to the index template, being a data stream configuration, will only apply to **new** data streams. Our data stream exists already, so even though we added a data stream lifecycle configuration in the index template it will not be applied to `dsl-data-stream`. @@ -315,7 +315,7 @@ We can easily change this data stream to be managed by {{ilm-init}} because we d We can achieve this in two ways: -1. [Delete the lifecycle](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-delete-lifecycle.html) from the data streams +1. 
[Delete the lifecycle](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete-data-lifecycle) from the data streams 2. Disable data stream lifecycle by configuring the `enabled` flag to `false`. Let’s implement option 2 and disable the data stream lifecycle: diff --git a/manage-data/lifecycle/data-stream/tutorial-update-existing-data-stream.md b/manage-data/lifecycle/data-stream/tutorial-update-existing-data-stream.md index 59e211386..813e3f235 100644 --- a/manage-data/lifecycle/data-stream/tutorial-update-existing-data-stream.md +++ b/manage-data/lifecycle/data-stream/tutorial-update-existing-data-stream.md @@ -13,7 +13,7 @@ To update the lifecycle of an existing data stream you do the following actions: ## Set a data stream’s lifecycle [set-lifecycle] -To add or to change the retention period of your data stream you can use the [PUT lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-put-lifecycle.html). +To add or to change the retention period of your data stream you can use the [PUT lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle). * You can set infinite retention period, meaning that your data should never be deleted. For example: @@ -36,7 +36,7 @@ To add or to change the retention period of your data stream you can use the [PU 1. The retention period of this data stream is set to 30 days. This means that {{es}} is allowed to delete data that is older than 30 days at its own discretion. -The changes in the lifecycle are applied on all backing indices of the data stream. You can see the effect of the change via the [explain API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-explain-lifecycle.html): +The changes in the lifecycle are applied on all backing indices of the data stream. 
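As a sketch of the retention setting discussed above, using the tutorial's `my-data-stream` and the 30-day period from the example callout:

```console
PUT _data_stream/my-data-stream/_lifecycle
{
  "data_retention": "30d"
}
```

Omitting `data_retention` from the request body leaves the retention period infinite, matching the first option described above.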
You can see the effect of the change via the [explain API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-explain-data-lifecycle): ```console GET .ds-my-data-stream-*/_lifecycle/explain @@ -89,13 +89,13 @@ The response will look like: ## Remove lifecycle for a data stream [delete-lifecycle] -To remove the lifecycle of a data stream you can use the [delete lifecycle API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-delete-lifecycle.html#data-streams-delete-lifecycle-request). As consequence, the maintenance operations that were applied by the lifecycle will no longer be applied to the data stream and all its backing indices. For example: +To remove the lifecycle of a data stream you can use the [delete lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete-data-lifecycle). As a consequence, the maintenance operations that were applied by the lifecycle will no longer be applied to the data stream and all its backing indices. For example: ```console DELETE _data_stream/my-data-stream/_lifecycle ``` -You can then use the [explain API](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-streams-explain-lifecycle.html) again to see that the indices are no longer managed. +You can then use the [explain API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-explain-data-lifecycle) again to see that the indices are no longer managed.
```console GET .ds-my-data-stream-*/_lifecycle/explain diff --git a/manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md b/manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md index 09117f1e6..cda8a9927 100644 --- a/manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md +++ b/manage-data/lifecycle/index-lifecycle-management/configure-lifecycle-policy.md @@ -27,7 +27,7 @@ To create a lifecycle policy from {{kib}}, open the menu and go to **Stack Manag You specify the lifecycle phases for the policy and the actions to perform in each phase. -The [create or update policy](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-put-lifecycle.html) API is invoked to add the policy to the {{es}} cluster. +The [create or update policy](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-put-lifecycle) API is invoked to add the policy to the {{es}} cluster. ::::{dropdown} API example ```console @@ -79,7 +79,7 @@ You can use the {{kib}} Create template wizard to create a template. To access t ![Create template page](../../../images/elasticsearch-reference-create-template-wizard-my_template.png "") -The wizard invokes the [create or update index template API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-template.html) to add templates to a cluster. +The wizard invokes the [create or update index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template) to add templates to a cluster. 
::::{dropdown} API example ```console @@ -138,7 +138,7 @@ Now you can start indexing data to the rollover alias specified in the lifecycle ## Apply lifecycle policy manually [apply-policy-manually] -You can specify a policy when you create an index or apply a policy to an existing index through {{kib}} Management or the [update settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html). When you apply a policy, {{ilm-init}} immediately starts managing the index. +You can specify a policy when you create an index or apply a policy to an existing index through {{kib}} Management or the [update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings). When you apply a policy, {{ilm-init}} immediately starts managing the index. ::::{important} Do not manually apply a policy that uses the rollover action. Policies that use rollover must be applied by the [index template](#apply-policy-template). Otherwise, the policy is not carried forward when the rollover action creates a new index. @@ -168,7 +168,7 @@ PUT test-index ### Apply a policy to multiple indices [apply-policy-multiple] -You can apply the same policy to multiple indices by using wildcards in the index name when you call the [update settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) API. +You can apply the same policy to multiple indices by using wildcards in the index name when you call the [update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) API. ::::{warning} Be careful that you don’t inadvertently match indices that you don’t want to modify. @@ -194,7 +194,7 @@ PUT mylogs-pre-ilm*/_settings <1> To switch an index’s lifecycle policy, follow these steps: -1. Remove the existing policy using the [remove policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-remove-policy.html). 
Target a data stream or alias to remove the policies of all its indices. +1. Remove the existing policy using the [remove policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-remove-policy). Target a data stream or alias to remove the policies of all its indices. ```console POST logs-my_app-default/_ilm/remove @@ -204,19 +204,19 @@ To switch an index’s lifecycle policy, follow these steps: For example, the [`forcemerge`](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-forcemerge.html) action temporarily closes an index before reopening it. Removing an index’s {{ilm-init}} policy during a `forcemerge` can leave the index closed indefinitely. - After policy removal, use the [get index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-index.html) to check an index’s state . Target a data stream or alias to get the state of all its indices. + After policy removal, use the [get index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get) to check an index’s state. Target a data stream or alias to get the state of all its indices. ```console GET logs-my_app-default ``` - You can then change the index as needed. For example, you can re-open any closed indices using the [open index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-open-close.html). + You can then change the index as needed. For example, you can re-open any closed indices using the [open index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-open). ```console POST logs-my_app-default/_open ``` -3. Assign a new policy using the [update settings API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html). Target a data stream or alias to assign a policy to all its indices. +3.
Assign a new policy using the [update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings). Target a data stream or alias to assign a policy to all its indices. ::::{warning} Don’t assign a new policy without first removing the existing policy. This can cause [phase execution](index-lifecycle.md#ilm-phase-execution) to silently fail. diff --git a/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md b/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md index f9b3de75c..3342324c5 100644 --- a/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md +++ b/manage-data/lifecycle/index-lifecycle-management/index-lifecycle.md @@ -36,7 +36,7 @@ If an index has been [rolled over](https://www.elastic.co/guide/en/elasticsearch :::: -If an index has unallocated shards and the [cluster health status](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html) is yellow, the index can still transition to the next phase according to its {{ilm}} policy. However, because {{es}} can only perform certain clean up tasks on a green cluster, there might be unexpected side effects. +If an index has unallocated shards and the [cluster health status](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-health) is yellow, the index can still transition to the next phase according to its {{ilm}} policy. However, because {{es}} can only perform certain clean up tasks on a green cluster, there might be unexpected side effects. To avoid increased disk usage and reliability issues, address any cluster health problems in a timely fashion. 
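Rounding out the policy-switch procedure described earlier, assigning the new policy via the update settings API can be sketched as follows (the policy name `my-new-policy` is illustrative):

```console
PUT logs-my_app-default/_settings
{
  "index": {
    "lifecycle": {
      "name": "my-new-policy"
    }
  }
}
```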
diff --git a/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md b/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md index 203f07cfb..989e55971 100644 --- a/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md +++ b/manage-data/lifecycle/index-lifecycle-management/index-management-in-kibana.md @@ -18,7 +18,7 @@ If you use {{es}} {security-features}, the following [security privileges](../.. * The `view_index_metadata` and `manage` index privileges to view a data stream or index’s data. * The `manage_index_templates` cluster privilege to manage index templates. -To add these privileges, go to **Stack Management > Security > Roles** or use the [Create or update roles API](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-put-role.html). +To add these privileges, go to **Stack Management > Security > Roles** or use the [Create or update roles API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role). ## Manage indices [view-edit-indices] @@ -30,8 +30,8 @@ Investigate your indices and perform operations from the **Indices** view. :class: screenshot ::: -* To show details and perform operations such as close, forcemerge, and flush, click the index name. To perform operations on multiple indices, select their checkboxes and then open the **Manage** menu. For more information on managing indices, refer to [Index APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices.html). -* To filter the list of indices, use the search bar or click a badge. Badges indicate if an index is a [follower index](https://www.elastic.co/guide/en/elasticsearch/reference/current/ccr-put-follow.html), a [rollup index](https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-get-rollup-index-caps.html), or [frozen](https://www.elastic.co/guide/en/elasticsearch/reference/current/unfreeze-index-api.html). 
+* To show details and perform operations such as close, forcemerge, and flush, click the index name. To perform operations on multiple indices, select their checkboxes and then open the **Manage** menu. For more information on managing indices, refer to [Index APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-indices). +* To filter the list of indices, use the search bar or click a badge. Badges indicate if an index is a [follower index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ccr-follow), a [rollup index](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-get-rollup-index-caps), or [frozen](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-unfreeze). * To drill down into the index [mappings](../../data-store/mapping.md), [settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html#index-modules-settings), and statistics, click an index name. From this view, you can navigate to **Discover** to further explore the documents in the index. :::{image} ../../../images/elasticsearch-reference-management_index_details.png @@ -87,7 +87,7 @@ In this tutorial, you’ll create an index template and use it to configure two **Step 2. Add settings, mappings, and aliases** -1. Add [component templates](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html) to your index template. +1. Add [component templates](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-component-template) to your index template. Component templates are pre-configured sets of mappings, index settings, and aliases you can reuse across multiple index templates. Badges indicate whether a component template contains mappings (**M**), index settings (**S**), aliases (**A**), or a combination of the three. @@ -173,7 +173,7 @@ You’re now ready to create new indices using your index template. } ``` -2. 
Use the [get index API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-index.html) to view the configurations for the new indices. The indices were configured using the index template you created earlier. +2. Use the [get index API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get) to view the configurations for the new indices. The indices were configured using the index template you created earlier. ```console GET /my-index-000001,my-index-000002 diff --git a/manage-data/lifecycle/index-lifecycle-management/manage-existing-indices.md b/manage-data/lifecycle/index-lifecycle-management/manage-existing-indices.md index 8fe18b8d4..726eef576 100644 --- a/manage-data/lifecycle/index-lifecycle-management/manage-existing-indices.md +++ b/manage-data/lifecycle/index-lifecycle-management/manage-existing-indices.md @@ -10,13 +10,13 @@ If you’ve been using Curator or some other mechanism to manage periodic indice * Set up your index templates to use an {{ilm-init}} policy to manage your new indices. Once {{ilm-init}} is managing your current write index, you can apply an appropriate policy to your old indices. * Reindex into an {{ilm-init}}-managed index. -::::{note} +::::{note} Starting in Curator version 5.7, Curator ignores {{ilm-init}} managed indices. :::: -## Apply policies to existing time series indices [ilm-existing-indices-apply] +## Apply policies to existing time series indices [ilm-existing-indices-apply] The simplest way to transition to managing your periodic indices with {{ilm-init}} is to [configure an index template](configure-lifecycle-policy.md#apply-policy-template) to apply a lifecycle policy to new indices. Once the index you are writing to is being managed by {{ilm-init}}, you can [manually apply a policy](configure-lifecycle-policy.md#apply-policy-multiple) to your older indices. 
@@ -28,13 +28,13 @@ You can specify different `min_age` values in the policy you use for existing in Once all pre-{{ilm-init}} indices have been aged out and removed, you can delete the policy you used to manage them. -::::{note} +::::{note} If you are using {{beats}} or {{ls}}, enabling {{ilm-init}} in version 7.0 and onward sets up {{ilm-init}} to manage new indices automatically. If you are using {{beats}} through {{ls}}, you might need to change your {{ls}} output configuration and invoke the {{beats}} setup to use {{ilm-init}} for new data. :::: -## Reindex into a managed index [ilm-existing-indices-reindex] +## Reindex into a managed index [ilm-existing-indices-reindex] An alternative to [applying policies to existing indices](#ilm-existing-indices-apply) is to reindex your data into an {{ilm-init}}-managed index. You might want to do this if creating periodic indices with very small amounts of data has led to excessive shard counts, or if continually indexing into the same index has led to large shards and performance issues. @@ -60,10 +60,10 @@ To reindex into the managed index: 1. Check once a minute to see if {{ilm-init}} actions such as rollover need to be performed. -3. Reindex your data using the [reindex API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html). If you want to partition the data in the order in which it was originally indexed, you can run separate reindex requests. +3. Reindex your data using the [reindex API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex). If you want to partition the data in the order in which it was originally indexed, you can run separate reindex requests. - ::::{important} - Documents retain their original IDs. If you don’t use automatically generated document IDs, and are reindexing from multiple source indices, you might need to do additional processing to ensure that document IDs don’t conflict. 
One way to do this is to use a [script](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html#reindex-scripts) in the reindex call to append the original index name to the document ID. + ::::{important} + Documents retain their original IDs. If you don’t use automatically generated document IDs, and are reindexing from multiple source indices, you might need to do additional processing to ensure that document IDs don’t conflict. One way to do this is to use a [script](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) in the reindex call to append the original index name to the document ID. :::: diff --git a/manage-data/lifecycle/index-lifecycle-management/restore-managed-data-stream-index.md b/manage-data/lifecycle/index-lifecycle-management/restore-managed-data-stream-index.md index d4c8b97ed..dc9ecd07d 100644 --- a/manage-data/lifecycle/index-lifecycle-management/restore-managed-data-stream-index.md +++ b/manage-data/lifecycle/index-lifecycle-management/restore-managed-data-stream-index.md @@ -5,7 +5,7 @@ mapped_pages: # Restore a managed data stream or index [index-lifecycle-and-snapshots] -To [restore](https://www.elastic.co/guide/en/elasticsearch/reference/current/restore-snapshot-api.html) managed indices, ensure that the {{ilm-init}} policies referenced by the indices exist. If necessary, you can restore {{ilm-init}} policies by setting [`include_global_state`](https://www.elastic.co/guide/en/elasticsearch/reference/current/restore-snapshot-api.html#restore-snapshot-api-request-body) to `true`. +To [restore](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-restore) managed indices, ensure that the {{ilm-init}} policies referenced by the indices exist. If necessary, you can restore {{ilm-init}} policies by setting [`include_global_state`](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-restore) to `true`. 
When you restore a managed index or a data stream with managed backing indices, {{ilm-init}} automatically resumes executing the restored indices' policies. A restored index’s `min_age` is relative to when it was originally created or rolled over, not its restoration time. Policy actions are performed on the same schedule whether or not an index has been restored from a snapshot. If you restore an index that was accidentally deleted half way through its month long lifecycle, it proceeds normally through the last two weeks of its lifecycle. @@ -13,8 +13,8 @@ In some cases, you might want to prevent {{ilm-init}} from immediately executing To prevent {{ilm-init}} from executing a restored index’s policy: -1. Temporarily [stop {{ilm-init}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-stop.html). This pauses execution of *all* {{ilm-init}} policies. +1. Temporarily [stop {{ilm-init}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop). This pauses execution of *all* {{ilm-init}} policies. 2. Restore the snapshot. -3. [Remove the policy](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-remove-policy.html) from the index or perform whatever actions you need to before {{ilm-init}} resumes policy execution. -4. [Restart {{ilm-init}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-start.html) to resume policy execution. +3. [Remove the policy](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-remove-policy) from the index or perform whatever actions you need to before {{ilm-init}} resumes policy execution. +4. [Restart {{ilm-init}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start) to resume policy execution. 
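The four steps above can be sketched end to end; the repository, snapshot, and index names here are placeholders:

```console
POST _ilm/stop

POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": "my-index"
}

POST my-index/_ilm/remove

POST _ilm/start
```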
diff --git a/manage-data/lifecycle/index-lifecycle-management/rollover.md b/manage-data/lifecycle/index-lifecycle-management/rollover.md index a6e085eda..0b18e20ff 100644 --- a/manage-data/lifecycle/index-lifecycle-management/rollover.md +++ b/manage-data/lifecycle/index-lifecycle-management/rollover.md @@ -12,7 +12,7 @@ When indexing time series data like logs or metrics, you can’t write to a sing * Shift older, less frequently accessed data to less expensive *cold* nodes, * Delete data according to your retention policies by removing entire indices. -We recommend using [data streams](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-data-stream.html) to manage time series data. Data streams automatically track the write index while keeping configuration to a minimum. +We recommend using [data streams](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-create-data-stream) to manage time series data. Data streams automatically track the write index while keeping configuration to a minimum. Each data stream requires an [index template](../../data-store/templates.md) that contains: diff --git a/manage-data/lifecycle/index-lifecycle-management/skip-rollover.md b/manage-data/lifecycle/index-lifecycle-management/skip-rollover.md index ce97fb54e..3b2a238f4 100644 --- a/manage-data/lifecycle/index-lifecycle-management/skip-rollover.md +++ b/manage-data/lifecycle/index-lifecycle-management/skip-rollover.md @@ -20,7 +20,7 @@ For example, if you need to change the name of new indices in a series while ret 1. Create a template for the new index pattern that uses the same policy. 2. Bootstrap the initial index. -3. Change the write index for the alias to the bootstrapped index using the [aliases API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html). +3. 
Change the write index for the alias to the bootstrapped index using the [aliases API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-update-aliases). 4. Set `index.lifecycle.indexing_complete` to `true` on the old index to indicate that it does not need to be rolled over. {{ilm-init}} continues to manage the old index in accordance with your existing policy. New indices are named according to the new template and managed according to the same policy without interruption. diff --git a/manage-data/lifecycle/index-lifecycle-management/start-stop-index-lifecycle-management.md b/manage-data/lifecycle/index-lifecycle-management/start-stop-index-lifecycle-management.md index 9c9bbdc85..4be9e5815 100644 --- a/manage-data/lifecycle/index-lifecycle-management/start-stop-index-lifecycle-management.md +++ b/manage-data/lifecycle/index-lifecycle-management/start-stop-index-lifecycle-management.md @@ -17,7 +17,7 @@ When you stop {{ilm-init}}, [{{slm-init}}](../../../deploy-manage/tools/snapshot ## Get {{ilm-init}} status [get-ilm-status] -To see the current status of the {{ilm-init}} service, use the [Get Status API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-get-status.html): +To see the current status of the {{ilm-init}} service, use the [Get Status API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-get-status): ```console GET _ilm/status @@ -34,7 +34,7 @@ Under normal operation, the response shows {{ilm-init}} is `RUNNING`: ## Stop {{ilm-init}} [stop-ilm] -To stop the {{ilm-init}} service and pause execution of all lifecycle policies, use the [Stop API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-stop.html): +To stop the {{ilm-init}} service and pause execution of all lifecycle policies, use the [Stop API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-stop): ```console POST _ilm/stop @@ -59,7 +59,7 @@ Once all policies are at a safe stopping 
point, {{ilm-init}} moves into the `STO ## Start {{ilm-init}} [_start_ilm_init] -To restart {{ilm-init}} and resume executing policies, use the [Start API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-start.html). This puts the {{ilm-init}} service in the `RUNNING` state and {{ilm-init}} begins executing policies from where it left off. +To restart {{ilm-init}} and resume executing policies, use the [Start API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-start). This puts the {{ilm-init}} service in the `RUNNING` state and {{ilm-init}} begins executing policies from where it left off. ```console POST _ilm/start diff --git a/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md b/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md index 33756623c..2b2be1e60 100644 --- a/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md +++ b/manage-data/lifecycle/index-lifecycle-management/tutorial-automate-rollover.md @@ -46,7 +46,7 @@ The `min_age` value is relative to the rollover time, not the index creation tim :::: -You can create the policy through {{kib}} or with the [create or update policy](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-put-lifecycle.html) API. To create the policy from {{kib}}, open the menu and go to **Stack Management > Index Lifecycle Policies**. Click **Create policy**. +You can create the policy through {{kib}} or with the [create or update policy](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-put-lifecycle) API. To create the policy from {{kib}}, open the menu and go to **Stack Management > Index Lifecycle Policies**. Click **Create policy**. :::{image} ../../../images/elasticsearch-reference-create-policy.png :alt: Create policy page @@ -104,7 +104,7 @@ You can use the {{kib}} Create template wizard to add the template. 
From Kibana, :alt: Create template page ::: -This wizard invokes the [create or update index template API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-template.html) to create the index template with the options you specify. +This wizard invokes the [create or update index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template) to create the index template with the options you specify. ::::{dropdown} API example ```console diff --git a/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md b/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md index 349790545..d17936eff 100644 --- a/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md +++ b/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md @@ -28,7 +28,7 @@ You want to send log files to an {{es}} cluster so you can visualize and analyze * Move indices to the warm data tier. * Set replica shards to 1. - * [Force merge](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-forcemerge.html) multiple index segments to free up the space used by deleted documents. + * [Force merge](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-forcemerge) multiple index segments to free up the space used by deleted documents. * Delete indices 90 days after rollover. @@ -117,7 +117,7 @@ The default `logs@lifecycle` policy is designed to prevent the creation of many 3. Click **Save as new policy**. ::::{tip} -Copies of managed {{ilm-init}} policies are also marked as **Managed**. You can use the [Create or update lifecycle policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-put-lifecycle.html) to update the `_meta.managed` parameter to `false`. +Copies of managed {{ilm-init}} policies are also marked as **Managed**. 
You can use the [Create or update lifecycle policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-put-lifecycle) to update the `_meta.managed` parameter to `false`. :::: @@ -126,7 +126,7 @@ Copies of managed {{ilm-init}} policies are also marked as **Managed**. You can To apply your new {{ilm-init}} policy to the `logs` index template, create or edit the `logs@custom` component template. -A `*@custom` component template allows you to customize the mappings and settings of managed index templates, without having to override managed index templates or component templates. This type of component template is automatically picked up by the index template. [Learn more](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html#put-component-template-api-path-params). +A `*@custom` component template allows you to customize the mappings and settings of managed index templates, without having to override managed index templates or component templates. This type of component template is automatically picked up by the index template. [Learn more](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-component-template). 1. Click on the **Component Template** tab and click **Create component template**. 2. Under **Logistics**, name the component template `logs@custom`. diff --git a/manage-data/lifecycle/rollup/getting-started-with-rollups.md b/manage-data/lifecycle/rollup/getting-started-with-rollups.md index 0bfc00425..858bf30fb 100644 --- a/manage-data/lifecycle/rollup/getting-started-with-rollups.md +++ b/manage-data/lifecycle/rollup/getting-started-with-rollups.md @@ -98,7 +98,7 @@ Instead, the {{rollup-features}} save the `count` and `sum` for the defined time :::: -For more details about the job syntax, see [Create {{rollup-jobs}}](https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-put-job.html). 
+For more details about the job syntax, see [Create {{rollup-jobs}}](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-put-job).
 
 After you execute the above command and create the job, you’ll receive the following response:
@@ -122,7 +122,7 @@ POST _rollup/job/sensor/_start
 ## Searching the rolled results [_searching_the_rolled_results]
 
-After the job has run and processed some data, we can use the [Rollup search](https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-search.html) endpoint to do some searching. The Rollup feature is designed so that you can use the same Query DSL syntax that you are accustomed to…​ it just happens to run on the rolled up data instead.
+After the job has run and processed some data, we can use the [Rollup search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-rollup-search) endpoint to do some searching. The Rollup feature is designed so that you can use the same Query DSL syntax that you are accustomed to…​ it just happens to run on the rolled up data instead.
 
 For example, take this query:
diff --git a/manage-data/lifecycle/rollup/rollup-search-limitations.md b/manage-data/lifecycle/rollup/rollup-search-limitations.md
index 1bd6e2561..e73248fa2 100644
--- a/manage-data/lifecycle/rollup/rollup-search-limitations.md
+++ b/manage-data/lifecycle/rollup/rollup-search-limitations.md
@@ -19,7 +19,7 @@ This page highlights the major limitations so that you are aware of them.
 ## Only one {{rollup}} index per search [_only_one_rollup_index_per_search]
 
-When using the [Rollup search](https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-search.html) endpoint, the `index` parameter accepts one or more indices. These can be a mix of regular, non-rollup indices and rollup indices. However, only one rollup index can be specified. The exact list of rules for the `index` parameter are as follows:
+When using the [Rollup search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-rollup-search) endpoint, the `index` parameter accepts one or more indices. These can be a mix of regular, non-rollup indices and rollup indices. However, only one rollup index can be specified. The exact list of rules for the `index` parameter are as follows:
 
 * At least one index/index-pattern must be specified. This can be either a rollup or non-rollup index. Omitting the index parameter, or using `_all`, is not permitted
 * Multiple non-rollup indices may be specified
@@ -76,7 +76,7 @@ The response will tell you that the field and aggregation were not possible, bec
 Rollups are stored at a certain granularity, as defined by the `date_histogram` group in the configuration. This means you can only search/aggregate the rollup data with an interval that is greater-than or equal to the configured rollup interval.
 
-For example, if data is rolled up at hourly intervals, the [Rollup search](https://www.elastic.co/guide/en/elasticsearch/reference/current/rollup-search.html) API can aggregate on any time interval hourly or greater. Intervals that are less than an hour will throw an exception, since the data simply doesn’t exist for finer granularities.
+For example, if data is rolled up at hourly intervals, the [Rollup search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-rollup-rollup-search) API can aggregate on any time interval hourly or greater. Intervals that are less than an hour will throw an exception, since the data simply doesn’t exist for finer granularities.
 ::::{admonition} Requests must be multiples of the config
 :name: rollup-search-limitations-intervals
diff --git a/manage-data/migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md b/manage-data/migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md
index d211f27ee..23bc50835 100644
--- a/manage-data/migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md
+++ b/manage-data/migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md
@@ -59,7 +59,7 @@ The `Destination` cluster should be the same or newer version as the `Source` cl
    ```
 
    ::::{note}
-   Make sure `reindex.remote.whitelist` is in an array format. All uploaded bundles will be uncompressed into `/app/config/` folder. Ensure the file path corresponds to your uploaded bundle in [Step 1](#ec-remote-reindex-step1). You can optionally set `reindex.ssl.verification_mode` to `full`, `certificate` or `none` depending on the validity of hostname and the certificate path. More details can be found in [reindex](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html#reindex-ssl) setting.
+   Make sure `reindex.remote.whitelist` is in an array format. All uploaded bundles will be uncompressed into `/app/config/` folder. Ensure the file path corresponds to your uploaded bundle in [Step 1](#ec-remote-reindex-step1). You can optionally set `reindex.ssl.verification_mode` to `full`, `certificate` or `none` depending on the validity of hostname and the certificate path. More details can be found in [reindex](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) setting.
    ::::
 
 3. Click **Back** to the **Edit** page and scroll to the button of the page to **Save** changes. This step will restart all Elasticsearch instances.
@@ -87,5 +87,5 @@ POST _reindex
 ```
 
 ::::{note}
-If you have many sources to reindex, it’s is generally better to reindex them one at a time and run them in parallel rather than using a glob pattern to pick up multiple sources. Check [reindex from multiple sources](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html#docs-reindex-from-multiple-sources) for more details.
+If you have many sources to reindex, it’s generally better to reindex them one at a time and run them in parallel rather than using a glob pattern to pick up multiple sources. Check [reindex from multiple sources](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex) for more details.
 ::::
diff --git a/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md b/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md
index 618360bb2..3bc3d7f42 100644
--- a/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md
+++ b/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md
@@ -98,7 +98,7 @@ Use any of the following repository types with searchable snapshots:
 * [Shared filesystems](../deploy-manage/tools/snapshot-and-restore/shared-file-system-repository.md) such as NFS
 * [Read-only HTTP and HTTPS repositories](../deploy-manage/tools/snapshot-and-restore/read-only-url-repository.md)
 
-You can also use alternative implementations of these repository types, for instance [MinIO](../deploy-manage/tools/snapshot-and-restore/s3-repository.md#repository-s3-client), as long as they are fully compatible. Use the [Repository analysis](https://www.elastic.co/guide/en/elasticsearch/reference/current/repo-analysis-api.html) API to analyze your repository’s suitability for use with searchable snapshots.
+You can also use alternative implementations of these repository types, for instance [MinIO](../deploy-manage/tools/snapshot-and-restore/s3-repository.md#repository-s3-client), as long as they are fully compatible. Use the [Repository analysis](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-repository-analyze) API to analyze your repository’s suitability for use with searchable snapshots.
 ::::::
 :::::::
@@ -127,7 +127,7 @@ You can customize these policies based on your performance, resilience, and rete
 To edit a policy in {{kib}}, open the main menu and go to **Stack Management > Index Lifecycle Policies**. Click the policy you’d like to edit.
 
-You can also use the [update lifecycle policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-put-lifecycle.html).
+You can also use the [update lifecycle policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-put-lifecycle).
 
 ```console
 PUT _ilm/policy/logs
@@ -183,7 +183,7 @@ PUT _ilm/policy/logs
 ::::::{tab-item} Custom application
 To create a policy in {{kib}}, open the main menu and go to **Stack Management > Index Lifecycle Policies**. Click **Create policy**.
 
-You can also use the [update lifecycle policy API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-put-lifecycle.html).
+You can also use the [update lifecycle policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-put-lifecycle).
 
 ```console
 PUT _ilm/policy/my-lifecycle-policy
@@ -262,7 +262,7 @@ If you’re unsure how to map your fields, use [runtime fields](data-store/mappi
 To create a component template in {{kib}}, open the main menu and go to **Stack Management > Index Management**. In the **Index Templates** view, click **Create component template**.
 
-You can also use the [create component template API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-component-template.html).
+You can also use the [create component template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-component-template).
 
 ```console
 # Creates a component template for mappings
@@ -314,7 +314,7 @@ Use your component templates to create an index template. Specify:
 To create an index template in {{kib}}, open the main menu and go to **Stack Management > Index Management**. In the **Index Templates** view, click **Create template**.
 
-You can also use the [create index template API](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-template.html). Include the `data_stream` object to enable data streams.
+You can also use the [create index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template). Include the `data_stream` object to enable data streams.
 
 ```console
 PUT _index_template/my-index-template
@@ -358,7 +358,7 @@ To explore and search your data in {{kib}}, open the main menu and select **Disc
 Use {{kib}}'s **Dashboard** feature to visualize your data in a chart, table, map, and more. See {{kib}}'s [Dashboard documentation](../explore-analyze/dashboards.md).
 
-You can also search and aggregate your data using the [search API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html). Use [runtime fields](data-store/mapping/define-runtime-fields-in-search-request.md) and [grok patterns](../explore-analyze/scripting/grok.md) to dynamically extract data from log messages and other unstructured content at search time.
+You can also search and aggregate your data using the [search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-search). Use [runtime fields](data-store/mapping/define-runtime-fields-in-search-request.md) and [grok patterns](../explore-analyze/scripting/grok.md) to dynamically extract data from log messages and other unstructured content at search time.
 ```console
 GET my-data-stream/_search
@@ -409,7 +409,7 @@ GET my-data-stream/_search
 }
 ```
 
-{{es}} searches are synchronous by default. Searches across frozen data, long time ranges, or large datasets may take longer. Use the [async search API](https://www.elastic.co/guide/en/elasticsearch/reference/current/async-search.html#submit-async-search) to run searches in the background. For more search options, see [*The search API*](../solutions/search/querying-for-search.md).
+{{es}} searches are synchronous by default. Searches across frozen data, long time ranges, or large datasets may take longer. Use the [async search API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-async-search-submit) to run searches in the background. For more search options, see [*The search API*](../solutions/search/querying-for-search.md).
 
 ```console
 POST my-data-stream/_async_search