diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md
index 36625fd7b..143b05b61 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md
@@ -113,7 +113,7 @@ Using the example text "Elastic is headquartered in Mountain View, California.",
## Add the NER model to an {{infer}} ingest pipeline [ex-ner-ingest]
-You can perform bulk {{infer}} on documents as they are ingested by using an [{{infer}} processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-processor.html) in your ingest pipeline. The novel *Les Misérables* by Victor Hugo is used as an example for {{infer}} in the following example. [Download](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/les-miserables-nd.json) the novel text split by paragraph as a JSON file, then upload it by using the [Data Visualizer](../../../manage-data/ingest/tools/upload-data-files.md). Give the new index the name `les-miserables` when uploading the file.
+You can perform bulk {{infer}} on documents as they are ingested by using an [{{infer}} processor](https://www.elastic.co/guide/en/elasticsearch/reference/current/inference-processor.html) in your ingest pipeline. The following example uses the novel *Les Misérables* by Victor Hugo as the source text for {{infer}}. [Download](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/les-miserables-nd.json) the novel text split by paragraph as a JSON file, then upload it by using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md). Give the new index the name `les-miserables` when uploading the file.
Now create an ingest pipeline either in the [Stack management UI](ml-nlp-inference.md#ml-nlp-inference-processor) or by using the API:
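+
+If you use the API, the request is typically made through the Console or one of the {{es}} clients. The following is a minimal sketch using the Python client; the pipeline name, model ID, and field mapping are assumptions for illustration, so replace them with the values from your own deployment (the exact model ID is shown on the **Trained Models** page), and note that the documented request may differ:
+
+```python
+from elasticsearch import Elasticsearch
+
+# Connection details are placeholders; use your own endpoint and credentials.
+client = Elasticsearch("https://localhost:9200", api_key="<api-key>")
+
+# Create an ingest pipeline with an inference processor that runs the deployed
+# NER model against the text of each ingested paragraph.
+client.ingest.put_pipeline(
+    id="ner",  # hypothetical pipeline name
+    description="NER pipeline for les-miserables",
+    processors=[
+        {
+            "inference": {
+                # Assumed model ID; copy the exact ID from your Trained Models page.
+                "model_id": "elastic__distilbert-base-uncased-finetuned-conll03-english",
+                "target_field": "ml.ner",
+                # Map the document field that holds the text to the field the model expects.
+                "field_map": {"paragraph": "text_field"},
+            }
+        }
+    ],
+)
+```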
diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md
index bed59cf75..a2885dac1 100644
--- a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md
+++ b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md
@@ -103,7 +103,7 @@ In this step, you load the data that you later use in an ingest pipeline to get
The data set `msmarco-passagetest2019-top1000` is a subset of the MS MARCO Passage Ranking data set used in the testing stage of the 2019 TREC Deep Learning Track. It contains 200 queries and for each query a list of relevant text passages extracted by a simple information retrieval (IR) system. From that data set, all unique passages with their IDs have been extracted and put into a [tsv file](https://github.com/elastic/stack-docs/blob/8.5/docs/en/stack/ml/nlp/data/msmarco-passagetest2019-unique.tsv), totaling 182469 passages. In the following, this file is used as the example data set.
-Upload the file by using the [Data Visualizer](../../../manage-data/ingest/tools/upload-data-files.md). Name the first column `id` and the second one `text`. The index name is `collection`. After the upload is done, you can see an index named `collection` with 182469 documents.
+Upload the file by using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md). Name the first column `id` and the second one `text`. The index name is `collection`. After the upload is done, you can see an index named `collection` with 182469 documents.
:::{image} ../../../images/machine-learning-ml-nlp-text-emb-data.png
:alt: Importing the data
diff --git a/manage-data/ingest.md b/manage-data/ingest.md
index e48a513fd..facc81f2c 100644
--- a/manage-data/ingest.md
+++ b/manage-data/ingest.md
@@ -28,7 +28,7 @@ Elastic offer tools designed to ingest specific types of general content. The co
* To send **application data** directly to {{es}}, use an [{{es}} language client](https://www.elastic.co/guide/en/elasticsearch/client/index.html).
* To index **web page content**, use the Elastic [web crawler](https://www.elastic.co/web-crawler).
* To sync **data from third-party sources**, use [connectors](https://www.elastic.co/guide/en/elasticsearch/reference/current/es-connectors.html). A connector syncs content from an original data source to an {{es}} index. Using connectors you can create *searchable*, read-only replicas of your data sources.
-* To index **single files** for testing in a non-production environment, use the {{kib}} [file uploader](ingest/tools/upload-data-files.md).
+* To index **single files** for testing in a non-production environment, use the {{kib}} [file uploader](ingest/upload-data-files.md).
If you would like to try things out before you add your own data, try using our [sample data](ingest/sample-data.md).
diff --git a/manage-data/ingest/tools.md b/manage-data/ingest/tools.md
index a5477ee44..d26ba6aad 100644
--- a/manage-data/ingest/tools.md
+++ b/manage-data/ingest/tools.md
@@ -18,15 +18,37 @@ mapped_urls:
% Use migrated content from existing pages that map to this page:
-% - [ ] ./raw-migrated-files/cloud/cloud/ec-cloud-ingest-data.md
+% - [x] ./raw-migrated-files/cloud/cloud/ec-cloud-ingest-data.md
% Notes: These are resources to pull from, but this new "Ingest tools overiew" page will not be a replacement for any of these old AsciiDoc pages. File upload: https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html#upload-data-kibana https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-file-upload.html API: https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html#_add_data_with_programming_languages https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-api.html OpenTelemetry: https://github.com/elastic/opentelemetry Fleet and Agent: https://www.elastic.co/guide/en/fleet/current/fleet-overview.html https://www.elastic.co/guide/en/serverless/current/fleet-and-elastic-agent.html Logstash: https://www.elastic.co/guide/en/logstash/current/introduction.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-logstash.html https://www.elastic.co/guide/en/serverless/current/logstash-pipelines.html Beats: https://www.elastic.co/guide/en/beats/libbeat/current/beats-reference.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-beats.html APM: /solutions/observability/apps/application-performance-monitoring-apm.md Application logging: https://www.elastic.co/guide/en/observability/current/application-logs.html ECS logging: https://www.elastic.co/guide/en/observability/current/logs-ecs-application.html Elastic serverless forwarder for AWS: https://www.elastic.co/guide/en/esf/current/aws-elastic-serverless-forwarder.html Integrations: https://www.elastic.co/guide/en/integrations/current/introduction.html Search connectors: https://www.elastic.co/guide/en/elasticsearch/reference/current/es-connectors.html https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-integrations-connector-client.html Web crawler: https://github.com/elastic/crawler/tree/main/docs
-% - [ ] ./raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md
-% - [ ] ./raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md
-% - [ ] https://www.elastic.co/customer-success/data-ingestion
-% - [ ] https://github.com/elastic/ingest-docs/pull/1373
+% - [This comparison page is being moved to the reference section, so I'm linking to that from the current page - Wajiha] ./raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md
+% - [x] ./raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md
+% - [x] https://www.elastic.co/customer-success/data-ingestion
+% - [x] https://github.com/elastic/ingest-docs/pull/1373
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
+% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
+% These IDs are from content that I'm not including on this current page. I've resolved them by changing the internal links to anchor links where needed. - Wajiha
$$$supported-outputs-beats-and-agent$$$
$$$additional-capabilities-beats-and-agent$$$
+
+Depending on the type of data you want to ingest, you can choose from a number of methods and tools for your ingestion process. The following table provides more information about each tool. Refer to the [Ingestion](/manage-data/ingest.md) overview for guidelines to help you select the optimal tool for your use case.
+
+| Tools | Usage | Links to more information |
+| ------- | --------------- | ------------------------- |
+| Integrations | Ingest data using a variety of Elastic integrations. | [Elastic Integrations](https://www.elastic.co/guide/en/integrations/current/index.html) |
+| File upload | Upload data from a file and inspect it before importing it into {{es}}. | [Upload data files](/manage-data/ingest/upload-data-files.md) |
+| APIs | Ingest data through code by using the APIs of one of the language clients or the {{es}} HTTP APIs. | [Document APIs](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html) |
+| OpenTelemetry | Collect and send your telemetry data to Elastic Observability. | [Elastic Distributions of OpenTelemetry](https://github.com/elastic/opentelemetry?tab=readme-ov-file#elastic-distributions-of-opentelemetry) |
+| Fleet and Elastic Agent | Add monitoring for logs, metrics, and other types of data to a host using Elastic Agent, and centrally manage it using Fleet. | [{{fleet}} and {{agent}} overview](https://www.elastic.co/guide/en/fleet/current/fleet-overview.html)<br>[{{fleet}} and {{agent}} restrictions (Serverless)](https://www.elastic.co/guide/en/fleet/current/fleet-agent-serverless-restrictions.html)<br>[{{beats}} and {{agent}} capabilities](https://www.elastic.co/guide/en/fleet/current/beats-agent-comparison.html) |
+| {{elastic-defend}} | {{elastic-defend}} provides organizations with prevention, detection, and response capabilities with deep visibility for EPP, EDR, SIEM, and Security Analytics use cases across Windows, macOS, and Linux operating systems running on both traditional endpoints and public cloud environments. | [Configure endpoint protection with {{elastic-defend}}](/solutions/security/configure-elastic-defend.md) |
+| {{ls}} | Dynamically unify data from a wide variety of data sources and normalize it into destinations of your choice with {{ls}}. | [Logstash (Serverless)](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-logstash.html)<br>[Logstash pipelines](/manage-data/ingest/transform-enrich/logstash-pipelines.md) |
+| {{beats}} | Use {{beats}} data shippers to send operational data to Elasticsearch directly or through Logstash. | [{{beats}} (Serverless)](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-beats.html)<br>[What are {{beats}}?](https://www.elastic.co/guide/en/beats/libbeat/current/beats-reference.html)<br>[{{beats}} and {{agent}} capabilities](https://www.elastic.co/guide/en/fleet/current/beats-agent-comparison.html) |
+| APM | Collect detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. | [Application performance monitoring (APM)](/solutions/observability/apps/application-performance-monitoring-apm.md) |
+| Application logs | Ingest application logs using Filebeat, {{agent}}, or the APM agent, or reformat application logs into Elastic Common Schema (ECS) logs and then ingest them using Filebeat or {{agent}}. | [Stream application logs](/solutions/observability/logs/stream-application-logs.md)<br>[ECS formatted application logs](/solutions/observability/logs/ecs-formatted-application-logs.md) |
+| Elastic Serverless Forwarder for AWS | Ship logs from your AWS environment to cloud-hosted or self-managed Elastic environments, or to {{ls}}. | [Elastic Serverless Forwarder](https://www.elastic.co/guide/en/esf/current/aws-elastic-serverless-forwarder.html) |
+| Connectors | Use connectors to extract data from an original data source and sync it to an {{es}} index. | [Ingest content with Elastic connectors](https://www.elastic.co/guide/en/elasticsearch/reference/current/es-connectors.html)<br>[Connector clients](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-integrations-connector-client.html) |
+| Web crawler | Discover, extract, and index searchable content from websites and knowledge bases using the web crawler. | [Elastic Open Web Crawler](https://github.com/elastic/crawler#readme) |
\ No newline at end of file
diff --git a/manage-data/ingest/tools/upload-data-files.md b/manage-data/ingest/tools/upload-data-files.md
deleted file mode 100644
index 84f43298b..000000000
--- a/manage-data/ingest/tools/upload-data-files.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-mapped_urls:
- - https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-file-upload.html
- - https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html#upload-data-kibana
----
-
-# Upload data files [upload-data-kibana]
-
-% What needs to be done: Align serverless/stateful
-
-% Use migrated content from existing pages that map to this page:
-
-% - [ ] ./raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-file-upload.md
-% - [ ] ./raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md
-
-
-
-% Note from David: I've removed the ID $$$upload-data-kibana$$$ from manage-data/ingest.md as those links should instead point to this page. So, please ensure that the following ID is included on this page. I've added it beside the title.
-
diff --git a/manage-data/ingest/upload-data-files.md b/manage-data/ingest/upload-data-files.md
new file mode 100644
index 000000000..212fe3842
--- /dev/null
+++ b/manage-data/ingest/upload-data-files.md
@@ -0,0 +1,63 @@
+---
+mapped_urls:
+ - https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-file-upload.html
+ - https://www.elastic.co/guide/en/kibana/current/connect-to-elasticsearch.html#upload-data-kibana
+---
+
+# Upload data files [upload-data-kibana]
+
+% What needs to be done: Align serverless/stateful
+
+% Use migrated content from existing pages that map to this page:
+
+% - [x] ./raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-file-upload.md
+% - [x] ./raw-migrated-files/kibana/kibana/connect-to-elasticsearch.md
+
+% Note from David: I've removed the ID $$$upload-data-kibana$$$ from manage-data/ingest.md as those links should instead point to this page. So, please ensure that the following ID is included on this page. I've added it beside the title.
+
+You can upload files, view their fields and metrics, and optionally import them to {{es}} with the Data Visualizer.
+
+To use the Data Visualizer, click **Upload a file** on the {{es}} **Getting Started** page, or navigate to the **Integrations** view and search for **Upload a file**. Either option opens the Data Visualizer UI.
+
+:::{image} /images/serverless-file-uploader-UI.png
+:alt: File upload UI
+:class: screenshot
+:::
+
+Drag a file into the upload area or click **Select or drag and drop a file** to choose a file from your computer.
+
+You can upload different file formats for analysis with the Data Visualizer:
+
+File formats supported up to 500 MB:
+
+* CSV
+* TSV
+* NDJSON
+* Log files
+
+File formats supported up to 60 MB:
+
+* PDF
+* Microsoft Office files (Word, Excel, PowerPoint)
+* Plain Text (TXT)
+* Rich Text (RTF)
+* Open Document Format (ODF)
+
+The Data Visualizer displays the first 1000 rows of the file. You can inspect the data and make any necessary changes before importing it. Click **Import** to continue the process.
+
+This process creates an index and imports the data into {{es}}. Once your data is in {{es}}, you can start exploring it. See [Explore and analyze](/explore-analyze/index.md) for more information.
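+
+Once the index exists, you can query it like any other index. The following is a minimal sketch using the Python client; the index name `my-uploaded-data` and the connection details are placeholders for the values you chose during the import:
+
+```python
+from elasticsearch import Elasticsearch
+
+# Connection details are placeholders; use your own endpoint and credentials.
+client = Elasticsearch("https://localhost:9200", api_key="<api-key>")
+
+# Confirm the number of imported documents and look at a few of them.
+print(client.count(index="my-uploaded-data"))
+
+response = client.search(index="my-uploaded-data", size=5)
+for hit in response["hits"]["hits"]:
+    print(hit["_source"])
+```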
+
+::::{important}
+The upload feature is not intended for use as part of a repeated production process, but rather for the initial exploration of your data.
+
+::::
+
+## Required privileges
+
+The {{stack-security-features}} provide roles and privileges that control which users can upload files. To upload a file in {{kib}} and import it into an {{es}} index, you’ll need:
+
+* `manage_pipeline` or `manage_ingest_pipelines` cluster privilege
+* `create`, `create_index`, `manage`, and `read` index privileges for the index
+* `all` {{kib}} privileges for **Discover** and **Data Views Management**
+
+You can manage your roles, privileges, and spaces in **{{stack-manage-app}}**.
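+
+As an illustration, the cluster and index privileges above could be bundled into a role through the security API. This is a sketch using the Python client; the role name and index pattern are hypothetical, and the {{kib}} feature privileges for **Discover** and **Data Views Management** still need to be granted through a {{kib}} role, for example in **{{stack-manage-app}}**:
+
+```python
+from elasticsearch import Elasticsearch
+
+# Connection details are placeholders; use your own endpoint and credentials.
+client = Elasticsearch("https://localhost:9200", api_key="<api-key>")
+
+# Hypothetical role covering the cluster and index privileges listed above.
+client.security.put_role(
+    name="file-upload-example",
+    cluster=["manage_pipeline"],
+    indices=[
+        {
+            "names": ["my-uploaded-data"],
+            "privileges": ["create", "create_index", "manage", "read"],
+        }
+    ],
+)
+```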
\ No newline at end of file
diff --git a/manage-data/toc.yml b/manage-data/toc.yml
index 62e359f72..1c5e611a7 100644
--- a/manage-data/toc.yml
+++ b/manage-data/toc.yml
@@ -91,6 +91,7 @@ toc:
- file: ingest/ingest-reference-architectures/agent-es-airgapped.md
- file: ingest/ingest-reference-architectures/agent-ls-airgapped.md
- file: ingest/sample-data.md
+ - file: ingest/upload-data-files.md
- file: ingest/transform-enrich.md
children:
- file: ingest/transform-enrich/ingest-pipelines-serverless.md
@@ -106,8 +107,6 @@ toc:
- file: ingest/transform-enrich/example-enrich-data-by-matching-value-to-range.md
- file: ingest/transform-enrich/index-mapping-text-analysis.md
- file: ingest/tools.md
- children:
- - file: ingest/tools/upload-data-files.md
- file: lifecycle.md
children:
- file: lifecycle/data-tiers.md
diff --git a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md b/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md
index 1486e3d7b..0401d8551 100644
--- a/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md
+++ b/raw-migrated-files/elasticsearch/elasticsearch-reference/semantic-search-inference.md
@@ -824,7 +824,7 @@ In this step, you load the data that you later use in the {{infer}} ingest pipel
Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS MARCO Passage Ranking data set. It consists of 200 queries, each accompanied by a list of relevant text passages. All unique passages, along with their IDs, have been extracted from that data set and compiled into a [tsv file](https://github.com/elastic/stack-docs/blob/main/docs/en/stack/ml/nlp/data/msmarco-passagetest2019-unique.tsv).
-Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/tools/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names***, assign `id` to the first column and `content` to the second. Click ***Apply***, then ***Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
+Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
## Ingest the data through the {{infer}} ingest pipeline [reindexing-data-infer]
diff --git a/raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md b/raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md
index 0a5d7e0d9..4acf7eaf5 100644
--- a/raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md
+++ b/raw-migrated-files/ingest-docs/fleet/beats-agent-comparison.md
@@ -23,8 +23,8 @@ This article summarizes the features and functionality you need to be aware of b
The following steps will help you determine if {{agent}} can support your use case:
1. Determine if the integrations you need are supported and Generally Available (GA) on {{agent}}. To find out if an integration is GA, see the [{{integrations}} quick reference table](https://docs.elastic.co/en/integrations/all_integrations).
-2. If the integration is available, check [Supported outputs](../../../manage-data/ingest/tools.md#supported-outputs-beats-and-agent) to see whether the required output is also supported.
-3. Review [Capabilities comparison](../../../manage-data/ingest/tools.md#additional-capabilities-beats-and-agent) to determine if any features required by your deployment are supported. {{agent}} should support most of the features available on {{beats}} and is updated for each release.
+2. If the integration is available, check [Supported outputs](#supported-outputs-beats-and-agent) to see whether the required output is also supported.
+3. Review [Capabilities comparison](#additional-capabilities-beats-and-agent) to determine if any features required by your deployment are supported. {{agent}} should support most of the features available on {{beats}} and is updated for each release.
If you are satisfied with all three steps, then {{agent}} is suitable for your deployment. However, if any steps fail your assessment, you should continue using {{beats}}, and review future updates or contact us in the [discuss forum](https://discuss.elastic.co/).
@@ -67,7 +67,7 @@ The following table shows the outputs supported by the {{agent}} in 9.0.0-beta1:
| [Project paths](https://www.elastic.co/guide/en/beats/filebeat/current/configuration-path.html) | {{agent}} configures these paths to provide a simpler and more streamlined configuration experience. |
| [External configuration file loading](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-reloading.html) | Config is distributed via policy. |
| [Live reloading](https://www.elastic.co/guide/en/beats/filebeat/current/_live_reloading.html) | Related to the config file reload. |
-| [Outputs](https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html) | Configured through {{fleet}}. See [Supported outputs](../../../manage-data/ingest/tools.md#supported-outputs-beats-and-agent). |
+| [Outputs](https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html) | Configured through {{fleet}}. See [Supported outputs](#supported-outputs-beats-and-agent). |
| [SSL](https://www.elastic.co/guide/en/beats/filebeat/current/configuration-ssl.html) | Supported |
| [{{ilm-cap}}](https://www.elastic.co/guide/en/beats/filebeat/current/ilm.html) | Enabled by default although the Agent uses [data streams](https://www.elastic.co/guide/en/fleet/current/data-streams.html). |
| [{{es}} index template loading](https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html) | No longer applicable |
diff --git a/solutions/search/hybrid-semantic-text.md b/solutions/search/hybrid-semantic-text.md
index 7f906f6a0..1b7670c7e 100644
--- a/solutions/search/hybrid-semantic-text.md
+++ b/solutions/search/hybrid-semantic-text.md
@@ -56,7 +56,7 @@ In this step, you load the data that you later use to create embeddings from.
Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS MARCO Passage Ranking data set. It consists of 200 queries, each accompanied by a list of relevant text passages. All unique passages, along with their IDs, have been extracted from that data set and compiled into a [tsv file](https://github.com/elastic/stack-docs/blob/main/docs/en/stack/ml/nlp/data/msmarco-passagetest2019-unique.tsv).
-Download the file and upload it to your cluster using the [Data Visualizer](../../manage-data/ingest/tools/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names***, assign `id` to the first column and `content` to the second. Click ***Apply***, then ***Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
+Download the file and upload it to your cluster using the [Data Visualizer](../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
## Reindex the data for hybrid search [hybrid-search-reindex-data]
diff --git a/solutions/search/ingest-for-search.md b/solutions/search/ingest-for-search.md
index 507225594..14659c8c3 100644
--- a/solutions/search/ingest-for-search.md
+++ b/solutions/search/ingest-for-search.md
@@ -41,7 +41,7 @@ You can use these specialized tools to add general content to {{es}} indices.
|--------|-------------|-------|
| [**Web crawler**](https://github.com/elastic/crawler) | Programmatically discover and index content from websites and knowledge bases | Crawl public-facing web content or internal sites accessible via HTTP proxy |
| [**Search connectors**](https://github.com/elastic/connectors) | Third-party integrations to popular content sources like databases, cloud storage, and business applications | Choose from a range of Elastic-built connectors or build your own in Python using the Elastic connector framework|
-| [**File upload**](/manage-data/ingest/tools/upload-data-files.md)| One-off manual uploads through the UI | Useful for testing or very small-scale use cases, but not recommended for production workflows |
+| [**File upload**](/manage-data/ingest/upload-data-files.md)| One-off manual uploads through the UI | Useful for testing or very small-scale use cases, but not recommended for production workflows |
### Process data at ingest time
diff --git a/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md b/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md
index 5a7689eeb..528477218 100644
--- a/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md
+++ b/solutions/search/semantic-search/semantic-search-elser-ingest-pipelines.md
@@ -102,7 +102,7 @@ The `msmarco-passagetest2019-top1000` dataset was not utilized to train the mode
::::
-Download the file and upload it to your cluster using the [File Uploader](../../../manage-data/ingest/tools/upload-data-files.md) in the UI. After your data is analyzed, click **Override settings**. Under **Edit field names***, assign `id` to the first column and `content` to the second. Click ***Apply***, then ***Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
+Download the file and upload it to your cluster using the [File Uploader](../../../manage-data/ingest/upload-data-files.md) in the UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
### Ingest the data through the {{infer}} ingest pipeline [reindexing-data-elser]
diff --git a/solutions/search/semantic-search/semantic-search-inference.md b/solutions/search/semantic-search/semantic-search-inference.md
index fe0335eb7..2f5d61066 100644
--- a/solutions/search/semantic-search/semantic-search-inference.md
+++ b/solutions/search/semantic-search/semantic-search-inference.md
@@ -829,7 +829,7 @@ In this step, you load the data that you later use in the {{infer}} ingest pipel
Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS MARCO Passage Ranking data set. It consists of 200 queries, each accompanied by a list of relevant text passages. All unique passages, along with their IDs, have been extracted from that data set and compiled into a [tsv file](https://github.com/elastic/stack-docs/blob/main/docs/en/stack/ml/nlp/data/msmarco-passagetest2019-unique.tsv).
-Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/tools/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names***, assign `id` to the first column and `content` to the second. Click ***Apply***, then ***Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
+Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
## Ingest the data through the {{infer}} ingest pipeline [reindexing-data-infer]
diff --git a/solutions/search/semantic-search/semantic-search-semantic-text.md b/solutions/search/semantic-search/semantic-search-semantic-text.md
index 896d77500..6c968fb40 100644
--- a/solutions/search/semantic-search/semantic-search-semantic-text.md
+++ b/solutions/search/semantic-search/semantic-search-semantic-text.md
@@ -63,7 +63,7 @@ In this step, you load the data that you later use to create embeddings from it.
Use the `msmarco-passagetest2019-top1000` data set, which is a subset of the MS MARCO Passage Ranking data set. It consists of 200 queries, each accompanied by a list of relevant text passages. All unique passages, along with their IDs, have been extracted from that data set and compiled into a [tsv file](https://github.com/elastic/stack-docs/blob/main/docs/en/stack/ml/nlp/data/msmarco-passagetest2019-unique.tsv).
-Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/tools/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names***, assign `id` to the first column and `content` to the second. Click ***Apply***, then ***Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
+Download the file and upload it to your cluster using the [Data Visualizer](../../../manage-data/ingest/upload-data-files.md) in the {{ml-app}} UI. After your data is analyzed, click **Override settings**. Under **Edit field names**, assign `id` to the first column and `content` to the second. Click **Apply**, then **Import**. Name the index `test-data`, and click **Import**. After the upload is complete, you will see an index named `test-data` with 182,469 documents.
## Reindex the data [semantic-text-reindex-data]
diff --git a/solutions/search/serverless-elasticsearch-get-started.md b/solutions/search/serverless-elasticsearch-get-started.md
index b6463e8a4..c71dd2845 100644
--- a/solutions/search/serverless-elasticsearch-get-started.md
+++ b/solutions/search/serverless-elasticsearch-get-started.md
@@ -109,7 +109,7 @@ If you’re already familiar with Elasticsearch, you can jump right into setting
* [{{es}} API](ingest-for-search.md)
* [Connector clients](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-integrations-connector-client.html)
- * [File Uploader](../../manage-data/ingest/tools/upload-data-files.md)
+ * [File Uploader](../../manage-data/ingest/upload-data-files.md)
* [{{beats}}](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-beats.html)
* [{{ls}}](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-logstash.html)
* [Elastic Open Web Crawler](https://github.com/elastic/crawler)